CWE VIEW: Hardware Design
This view organizes weaknesses around concepts that are frequently used or encountered in hardware design. Accordingly, this view can align closely with the perspectives of designers, manufacturers, educators, and assessment vendors. It provides a variety of categories that are intended to simplify navigation, browsing, and mapping.
The following graph shows the tree-like relationships between weaknesses that exist at different levels of abstraction. At the highest level, categories and pillars exist to group weaknesses. Categories (which are not technically weaknesses) are special CWE entries used to group weaknesses that share a common characteristic. Pillars are weaknesses that are described in the most abstract fashion. Below these top-level entries are weaknesses at varying levels of abstraction. Classes are still very abstract, typically independent of any specific language or technology. Base-level weaknesses are used to present a more specific type of weakness. A variant is a weakness that is described at a very low level of detail, typically limited to a specific language or technology. A chain is a set of weaknesses that must be reachable consecutively in order to produce an exploitable vulnerability, while a composite is a set of weaknesses that must all be present simultaneously in order to produce an exploitable vulnerability.
1194 - Hardware Design
  1195 (Manufacturing and Life Cycle Management Concerns): Weaknesses in this category are root-caused to defects that arise in the semiconductor-manufacturing process or during the life cycle and supply chain.
    1059 (Insufficient Technical Documentation): The product does not contain sufficient technical or engineering documentation (whether on paper or in electronic form) that contains descriptions of all the relevant software/hardware elements of the product, such as its usage, structure, architectural components, interfaces, design, implementation, configuration, operation, etc.
    1248 (Semiconductor Defects in Hardware Logic with Security-Sensitive Implications): The security-sensitive hardware module contains semiconductor defects.
    1266 (Improper Scrubbing of Sensitive Data from Decommissioned Device): The product does not properly provide a capability for the product administrator to remove sensitive data at the time the product is decommissioned. A scrubbing capability could be missing, insufficient, or incorrect.
    1269 (Product Released in Non-Release Configuration): The product released to market is released in pre-production or manufacturing configuration.
    1273 (Device Unlock Credential Sharing): The credentials necessary for unlocking a device are shared across multiple parties and may expose sensitive information.
    1297 (Unprotected Confidential Information on Device is Accessible by OSAT Vendors): The product does not adequately protect confidential information on the device from being accessed by Outsourced Semiconductor Assembly and Test (OSAT) vendors.
  1196 (Security Flow Issues): Weaknesses in this category are related to improper design of full-system security flows, including but not limited to secure boot, secure update, and hardware-device attestation.
    1190 (DMA Device Enabled Too Early in Boot Phase): The product enables a Direct Memory Access (DMA) capable device before the security configuration settings are established, which allows an attacker to extract data from or gain privileges on the product.
    1193 (Power-On of Untrusted Execution Core Before Enabling Fabric Access Control): The product enables components that contain untrusted firmware before memory and fabric access controls have been enabled.
    1264 (Hardware Logic with Insecure De-Synchronization between Control and Data Channels): The hardware logic for error handling and security checks can incorrectly forward data before the security check is complete.
    1274 (Improper Access Control for Volatile Memory Containing Boot Code): The product conducts a secure-boot process that transfers bootloader code from Non-Volatile Memory (NVM) into Volatile Memory (VM), but it does not have sufficient access control or other protections for the Volatile Memory.
    1283 (Mutable Attestation or Measurement Reporting Data): The register contents used for attestation or measurement reporting data to verify boot flow are modifiable by an adversary.
    1310 (Missing Ability to Patch ROM Code): Missing an ability to patch ROM code may leave a System or System-on-Chip (SoC) in a vulnerable state.
    1326 (Missing Immutable Root of Trust in Hardware): A missing immutable root of trust in the hardware results in the ability to bypass secure boot or execute untrusted or adversarial boot code.
    1328 (Security Version Number Mutable to Older Versions): Security-version number in hardware is mutable, resulting in the ability to downgrade (roll back) the boot firmware to vulnerable code versions.
  1197 (Integration Issues): Weaknesses in this category are those that arise due to integration of multiple hardware Intellectual Property (IP) cores, from System-on-a-Chip (SoC) subsystem interactions, or from hardware platform subsystem interactions.
    1276 (Hardware Child Block Incorrectly Connected to Parent System): Signals between a hardware IP and the parent system design are incorrectly connected, causing security risks.
  1198 (Privilege Separation and Access Control Issues): Weaknesses in this category are related to features and mechanisms providing hardware-based isolation and access control (e.g., identity, policy, locking control) of sensitive shared hardware resources such as registers and fuses.
    276 (Incorrect Default Permissions): During installation, installed file permissions are set to allow anyone to modify those files.
    441 (Unintended Proxy or Intermediary ('Confused Deputy')): The product receives a request, message, or directive from an upstream component, but the product does not sufficiently preserve the original source of the request before forwarding the request to an external actor that is outside of the product's control sphere. This causes the product to appear to be the source of the request, leading it to act as a proxy or other intermediary between the upstream component and the external actor.
    1189 (Improper Isolation of Shared Resources on System-on-a-Chip (SoC)): The System-On-a-Chip (SoC) does not properly isolate shared resources between trusted and untrusted agents.
    1192 (System-on-Chip (SoC) Using Components without Unique, Immutable Identifiers): The System-on-Chip (SoC) does not have unique, immutable identifiers for each of its components.
    1220 (Insufficient Granularity of Access Control): The product implements access controls via a policy or other feature with the intention to disable or restrict accesses (reads and/or writes) to assets in a system from untrusted agents. However, implemented access controls lack required granularity, which renders the control policy too broad because it allows accesses from unauthorized agents to the security-sensitive assets.
    1222 (Insufficient Granularity of Address Regions Protected by Register Locks): The product defines a large address region protected from modification by the same register lock control bit. This results in a conflict between the functional requirement that some addresses need to be writable by software during operation and the security requirement that the system configuration lock bit must be set during the boot process.
    1242 (Inclusion of Undocumented Features or Chicken Bits): The device includes chicken bits or undocumented features that can create entry points for unauthorized actors.
    1260 (Improper Handling of Overlap Between Protected Memory Ranges): The product allows address regions to overlap, which can result in the bypassing of intended memory protection.
    1262 (Improper Access Control for Register Interface): The product uses memory-mapped I/O registers that act as an interface to hardware functionality from software, but there is improper access control to those registers.
    1267 (Policy Uses Obsolete Encoding): The product uses an obsolete encoding mechanism to implement access controls.
    1268 (Policy Privileges are not Assigned Consistently Between Control and Data Agents): The product's hardware-enforced access control for a particular resource improperly accounts for privilege discrepancies between control and write policies.
    1280 (Access Control Check Implemented After Asset is Accessed): A product's hardware-based access control check occurs after the asset has been accessed.
    1294 (Insecure Security Identifier Mechanism): The System-on-Chip (SoC) implements a Security Identifier mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Identifiers are not correctly implemented.
      1259 (Improper Restriction of Security Token Assignment): The System-On-A-Chip (SoC) implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens are improperly protected.
      1270 (Generation of Incorrect Security Tokens): The product implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens generated in the system are incorrect.
      1290 (Incorrect Decoding of Security Identifiers): The product implements a decoding mechanism to decode certain bus-transaction signals to security identifiers. If the decoding is implemented incorrectly, then untrusted agents can now gain unauthorized access to the asset.
      1292 (Incorrect Conversion of Security Identifiers): The product implements a conversion mechanism to map certain bus-transaction signals to security identifiers. However, if the conversion is incorrectly implemented, untrusted agents can gain unauthorized access to the asset.
    1299 (Missing Protection Mechanism for Alternate Hardware Interface): The lack of protections on alternate paths to access control-protected assets (such as unprotected shadow registers and other external-facing unguarded interfaces) allows an attacker to bypass existing protections to the asset that are only performed against the primary path.
    1302 (Missing Security Identifier): The product implements a security identifier mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. A transaction is sent without a security identifier.
    1303 (Non-Transparent Sharing of Microarchitectural Resources): Hardware structures shared across execution contexts (e.g., caches and branch predictors) can violate the expected architecture isolation between contexts.
    1314 (Missing Write Protection for Parametric Data Values): The device does not write-protect the parametric data values for sensors that scale the sensor value, allowing untrusted software to manipulate the apparent result and potentially damage hardware or cause operational failure.
    1318 (Missing Support for Security Features in On-chip Fabrics or Buses): On-chip fabrics or buses either do not support or are not configured to support privilege separation or other security features, such as access control.
    1334 (Unauthorized Error Injection Can Degrade Hardware Redundancy): An unauthorized agent can inject errors into a redundant block to deprive the system of redundancy or put the system in a degraded operating mode.
  1199 (General Circuit and Logic Design Concerns): Weaknesses in this category are related to hardware-circuit design and logic (e.g., CMOS transistors, finite state machines, and registers) as well as issues related to hardware description languages such as SystemVerilog and VHDL.
    1209 (Failure to Disable Reserved Bits): The reserved bits in a hardware design are not disabled prior to production. Typically, reserved bits are used for future capabilities and should not support any functional logic in the design. However, designers might covertly use these bits to debug or further develop new capabilities in production hardware. Adversaries with access to these bits will write to them in hopes of compromising hardware state.
    1221 (Incorrect Register Defaults or Module Parameters): Hardware description language code incorrectly defines register defaults or hardware IP parameters to insecure values.
    1223 (Race Condition for Write-Once Attributes): A write-once register in hardware design is programmable by an untrusted software component earlier than the trusted software component, resulting in a race condition issue.
    1224 (Improper Restriction of Write-Once Bit Fields): The hardware design control register "sticky bits" or write-once bit fields are improperly implemented, such that they can be reprogrammed by software.
    1231 (Improper Prevention of Lock Bit Modification): The product uses a trusted lock bit for restricting access to registers, address regions, or other resources, but the product does not prevent the value of the lock bit from being modified after it has been set.
    1232 (Improper Lock Behavior After Power State Transition): Register lock bit protection disables changes to system configuration once the bit is set. Some of the protected registers or lock bits become programmable after power state transitions (e.g., entry and wake from low-power sleep modes), causing the system configuration to be changeable.
    1233 (Security-Sensitive Hardware Controls with Missing Lock Bit Protection): The product uses a register lock bit protection mechanism, but it does not ensure that the lock bit prevents modification of system registers or controls that perform changes to important hardware system configuration.
    1234 (Hardware Internal or Debug Modes Allow Override of Locks): System configuration protection may be bypassed during debug mode.
    1245 (Improper Finite State Machines (FSMs) in Hardware Logic): Faulty finite state machines (FSMs) in the hardware logic allow an attacker to put the system in an undefined state, to cause a denial of service (DoS) or gain privileges on the victim's system.
    1250 (Improper Preservation of Consistency Between Independent Representations of Shared State): The product has or supports multiple distributed components or sub-systems that are each required to keep their own local copy of shared data - such as state or cache - but the product does not ensure that all local copies remain consistent with each other.
    1253 (Incorrect Selection of Fuse Values): The logic level used to set a system to a secure state relies on a fuse being unblown. An attacker can set the system to an insecure state merely by blowing the fuse.
    1254 (Incorrect Comparison Logic Granularity): The product's comparison logic is performed over a series of steps rather than across the entire string in one operation. If there is a comparison logic failure on one of these steps, the operation may be vulnerable to a timing attack that can result in the interception of the process for nefarious purposes.
    1261 (Improper Handling of Single Event Upsets): The hardware logic does not effectively handle when single-event upsets (SEUs) occur.
    1298 (Hardware Logic Contains Race Conditions): A race condition in the hardware logic results in undermining security guarantees of the system.
  1201 (Core and Compute Issues): Weaknesses in this category are typically associated with CPUs, Graphics, Vision, AI, FPGA, and microcontrollers.
    1252 (CPU Hardware Not Configured to Support Exclusivity of Write and Execute Operations): The CPU is not configured to provide hardware support for exclusivity of write and execute operations on memory. This allows an attacker to execute data from all of memory.
    1281 (Sequence of Processor Instructions Leads to Unexpected Behavior): Specific combinations of processor instructions lead to undesirable behavior such as locking the processor until a hard reset is performed.
    1342 (Information Exposure through Microarchitectural State after Transient Execution): The processor does not properly clear microarchitectural state after incorrect microcode assists or speculative execution, resulting in transient execution.
  1202 (Memory and Storage Issues): Weaknesses in this category are typically associated with memory (e.g., DRAM, SRAM) and storage technologies (e.g., NAND Flash, OTP, EEPROM, and eMMC).
    226 (Sensitive Information in Resource Not Removed Before Reuse): The product releases a resource such as memory or a file so that it can be made available for reuse, but it does not clear or "zeroize" the information contained in the resource before the product performs a critical state transition or makes the resource available for reuse by other entities.
      1239 (Improper Zeroization of Hardware Register): The hardware product does not properly clear sensitive information from built-in registers when the user of the hardware block changes.
      1342 (Information Exposure through Microarchitectural State after Transient Execution): The processor does not properly clear microarchitectural state after incorrect microcode assists or speculative execution, resulting in transient execution.
    1246 (Improper Write Handling in Limited-write Non-Volatile Memories): The product does not implement or incorrectly implements wear leveling operations in limited-write non-volatile memories.
    1251 (Mirrored Regions with Different Values): The product's architecture mirrors regions without ensuring that their contents always stay in sync.
    1257 (Improper Access Control Applied to Mirrored or Aliased Memory Regions): Aliased or mirrored memory regions in hardware designs may have inconsistent read/write permissions enforced by the hardware. A possible result is that an untrusted agent is blocked from accessing a memory region but is not blocked from accessing the corresponding aliased memory region.
    1282 (Assumed-Immutable Data is Stored in Writable Memory): Immutable data, such as a first-stage bootloader, device identifiers, and "write-once" configuration settings, are stored in writable memory that can be re-programmed or updated in the field.
  1203 (Peripherals, On-chip Fabric, and Interface/IO Problems): Weaknesses in this category are related to hardware security problems that apply to peripheral devices, IO interfaces, on-chip interconnects, network-on-chip (NoC), and buses. For example, this category includes issues related to design of hardware interconnect and/or protocols such as PCIe, USB, SMBUS, general-purpose IO pins, and user-input peripherals such as mouse and keyboard.
    1311 (Improper Translation of Security Attributes by Fabric Bridge): The bridge incorrectly translates security attributes from either trusted to untrusted or from untrusted to trusted when converting from one fabric protocol to another.
    1312 (Missing Protection for Mirrored Regions in On-Chip Fabric Firewall): The firewall in an on-chip fabric protects the main addressed region, but it does not protect any mirrored memory or memory-mapped-IO (MMIO) regions.
    1315 (Improper Setting of Bus Controlling Capability in Fabric End-point): The bus controller enables bits in the fabric end-point to allow responder devices to control transactions on the fabric.
    1316 (Fabric-Address Map Allows Programming of Unwarranted Overlaps of Protected and Unprotected Ranges): The address map of the on-chip fabric has protected and unprotected regions overlapping, allowing an attacker to bypass access control to the overlapping portion of the protected region.
    1317 (Improper Access Control in Fabric Bridge): The product uses a fabric bridge for transactions between two Intellectual Property (IP) blocks, but the bridge does not properly perform the expected privilege, identity, or other access control checks between those IP blocks.
    1331 (Improper Isolation of Shared Resources in Network On Chip (NoC)): The Network On Chip (NoC) does not isolate or incorrectly isolates its on-chip-fabric and internal resources such that they are shared between trusted and untrusted agents, creating timing channels.
  1205 (Security Primitives and Cryptography Issues): Weaknesses in this category are related to hardware implementations of cryptographic protocols and other hardware-security primitives such as physical unclonable functions (PUFs) and random number generators (RNGs).
    203 (Observable Discrepancy): The product behaves differently or sends different responses under different circumstances in a way that is observable to an unauthorized actor, which exposes security-relevant information about the state of the product, such as whether a particular operation was successful or not.
      1300 (Improper Protection of Physical Side Channels): The device does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.
    325 (Missing Cryptographic Step): The product does not implement a required step in a cryptographic algorithm, resulting in weaker encryption than advertised by the algorithm.
    1240 (Use of a Cryptographic Primitive with a Risky Implementation): To fulfill the need for a cryptographic primitive, the product implements a cryptographic algorithm using a non-standard, unproven, or disallowed/non-compliant cryptographic implementation.
    1241 (Use of Predictable Algorithm in Random Number Generator): The device uses an algorithm that is predictable and generates a pseudo-random number.
    1279 (Cryptographic Operations are run Before Supporting Units are Ready): Performing cryptographic operations without ensuring that the supporting inputs are ready to supply valid data may compromise the cryptographic result.
    1351 (Improper Handling of Hardware Behavior in Exceptionally Cold Environments): A hardware device, or the firmware running on it, is missing or has incorrect protection features to maintain goals of security primitives when the device is cooled below standard operating temperatures.
  1206 (Power, Clock, Thermal, and Reset Concerns): Weaknesses in this category are related to system power, voltage, current, temperature, clocks, system state saving/restoring, and resets at the platform and SoC level.
    1232 (Improper Lock Behavior After Power State Transition): Register lock bit protection disables changes to system configuration once the bit is set. Some of the protected registers or lock bits become programmable after power state transitions (e.g., entry and wake from low-power sleep modes), causing the system configuration to be changeable.
    1247 (Improper Protection Against Voltage and Clock Glitches): The device does not contain or contains incorrectly implemented circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
    1248 (Semiconductor Defects in Hardware Logic with Security-Sensitive Implications): The security-sensitive hardware module contains semiconductor defects.
    1255 (Comparison Logic is Vulnerable to Power Side-Channel Attacks): A device's real time power consumption may be monitored during security token evaluation and the information gleaned may be used to determine the value of the reference token.
    1256 (Improper Restriction of Software Interfaces to Hardware Features): The product provides software-controllable device functionality for capabilities such as power and clock management, but it does not properly limit functionality that can lead to modification of hardware memory or register bits, or the ability to observe physical side channels.
    1271 (Uninitialized Value on Reset for Registers Holding Security Settings): Security-critical logic is not set to a known value on reset.
    1304 (Improperly Preserved Integrity of Hardware Configuration State During a Power Save/Restore Operation): The product performs a power save/restore operation, but it does not ensure that the integrity of the configuration state is maintained and/or verified between the beginning and ending of the operation.
    1314 (Missing Write Protection for Parametric Data Values): The device does not write-protect the parametric data values for sensors that scale the sensor value, allowing untrusted software to manipulate the apparent result and potentially damage hardware or cause operational failure.
    1320 (Improper Protection for Outbound Error Messages and Alert Signals): Untrusted agents can disable alerts about signal conditions exceeding limits or the response mechanism that handles such alerts.
    1332 (Improper Handling of Faults that Lead to Instruction Skips): The device is missing or incorrectly implements circuitry or sensors that detect and mitigate the skipping of security-critical CPU instructions when they occur.
    1338 (Improper Protections Against Hardware Overheating): A hardware device is missing or has inadequate protection features to prevent overheating.
  1207 (Debug and Test Problems): Weaknesses in this category are related to hardware debug and test interfaces such as JTAG and scan chain.
    1191 (On-Chip Debug and Test Interface With Improper Access Control): The chip does not implement or does not correctly perform access control to check whether users are authorized to access internal registers and test modes through the physical debug/test interface.
    1234 (Hardware Internal or Debug Modes Allow Override of Locks): System configuration protection may be bypassed during debug mode.
    1243 (Sensitive Non-Volatile Information Not Protected During Debug): Access to security-sensitive information stored in fuses is not limited during debug.
    1244 (Internal Asset Exposed to Unsafe Debug Access Level or State): The product uses physical debug or test interfaces with support for multiple access levels, but it assigns the wrong debug access level to an internal asset, providing unintended access to the asset from untrusted debug agents.
    1258 (Exposure of Sensitive System Information Due to Uncleared Debug Information): The hardware does not fully clear security-sensitive values, such as keys and intermediate values in cryptographic operations, when debug mode is entered.
    1272 (Sensitive Information Uncleared Before Debug/Power State Transition): The product performs a power or debug state transition, but it does not clear sensitive information that should no longer be accessible due to changes to information access restrictions.
    1291 (Public Key Re-Use for Signing both Debug and Production Code): The same public key is used for signing both debug and production code.
    1295 (Debug Messages Revealing Unnecessary Information): The product fails to adequately prevent the revealing of unnecessary and potentially sensitive system information within debugging messages.
    1296 (Incorrect Chaining or Granularity of Debug Components): The product's debug components contain incorrect chaining or granularity of debug components.
    1313 (Hardware Allows Activation of Test or Debug Logic at Runtime): During runtime, the hardware allows for test or debug logic (feature) to be activated, which allows for changing the state of the hardware. This feature can alter the intended behavior of the system and allow for alteration and leakage of sensitive data by an adversary.
    1323 (Improper Management of Sensitive Trace Data): Trace data collected from several sources on the System-on-Chip (SoC) is stored in unprotected locations or transported to untrusted agents.
    319 (Cleartext Transmission of Sensitive Information): The product transmits sensitive or security-critical data in cleartext in a communication channel that can be sniffed by unauthorized actors.
  1208 (Cross-Cutting Problems): Weaknesses in this category can arise in multiple areas of hardware design or can apply to a wide cross-section of components.
    440 (Expected Behavior Violation): A feature, API, or function does not perform according to its specification.
    1053 (Missing Documentation for Design): The product does not have documentation that represents how it is designed.
    1059 (Insufficient Technical Documentation): The product does not contain sufficient technical or engineering documentation (whether on paper or in electronic form) that contains descriptions of all the relevant software/hardware elements of the product, such as its usage, structure, architectural components, interfaces, design, implementation, configuration, operation, etc.
    1263 (Improper Physical Access Control): The product is designed with access restricted to certain information, but it does not sufficiently protect against an unauthorized actor with physical access to these areas.
    1277 (Firmware Not Updateable): The product does not provide its users with the ability to update or patch its firmware to address any vulnerabilities or weaknesses that may be present.
    1301 (Insufficient or Incomplete Data Removal within Hardware Component): The product's data removal process does not completely delete all data and potentially sensitive information within hardware components.
      1330 (Remanent Data Readable after Memory Erase): Confidential information stored in memory circuits is readable or recoverable after being cleared or erased.
    1329 (Reliance on Component That is Not Updateable): The product contains a component that cannot be updated or patched in order to remove vulnerabilities or significant bugs.
    1357 (Reliance on Insufficiently Trustworthy Component): The product is built from multiple separate components, but it uses a component that is not sufficiently trusted to meet expectations for security, reliability, updateability, and maintainability.
  1388 (Physical Access Issues and Concerns): Weaknesses in this category are related to concerns of physical access.
    1384 (Improper Handling of Physical or Environmental Conditions): The product does not properly handle unexpected physical or environmental conditions that occur naturally or are artificially induced.
    1319 (Improper Protection against Electromagnetic Fault Injection (EM-FI)): The device is susceptible to electromagnetic fault injection attacks, causing device internal information to be compromised or security mechanisms to be bypassed.
    1247 (Improper Protection Against Voltage and Clock Glitches): The device does not contain or contains incorrectly implemented circuitry or sensors to detect and mitigate voltage and clock glitches and protect sensitive information or software contained on the device.
    1261 (Improper Handling of Single Event Upsets): The hardware logic does not effectively handle when single-event upsets (SEUs) occur.
    1332 (Improper Handling of Faults that Lead to Instruction Skips): The device is missing or incorrectly implements circuitry or sensors that detect and mitigate the skipping of security-critical CPU instructions when they occur.
    1351 (Improper Handling of Hardware Behavior in Exceptionally Cold Environments): A hardware device, or the firmware running on it, is missing or has incorrect protection features to maintain goals of security primitives when the device is cooled below standard operating temperatures.
    1278 (Missing Protection Against Hardware Reverse Engineering Using Integrated Circuit (IC) Imaging Techniques): Information stored in hardware may be recovered by an attacker with the capability to capture and analyze images of the integrated circuit using techniques such as scanning electron microscopy.
    1255 (Comparison Logic is Vulnerable to Power Side-Channel Attacks): A device's real time power consumption may be monitored during security token evaluation and the information gleaned may be used to determine the value of the reference token.
    1300 (Improper Protection of Physical Side Channels): The device does not contain sufficient protection mechanisms to prevent physical side channels from exposing sensitive information due to patterns in physically observable phenomena such as variations in power consumption, electromagnetic emissions (EME), or acoustic emissions.
    1248 (Semiconductor Defects in Hardware Logic with Security-Sensitive Implications): The security-sensitive hardware module contains semiconductor defects.
Other: The top-level categories in this view represent commonly understood areas/terms within hardware design and are meant to aid the user in identifying potential related weaknesses. It is possible for the same weakness to exist within multiple different categories.
Other: This view attempts to present weaknesses in a simple and intuitive way. As such, it targets a single level of abstraction. It is important to realize that not every CWE will be represented in this view. High-level class weaknesses and low-level variant weaknesses are mostly ignored. However, by exploring the weaknesses that are included, and following the defined relationships, one can find these higher and lower level weaknesses.
View Components
CWE-1280: Access Control Check Implemented After Asset is Accessed
A product's hardware-based access control check occurs after the asset has been accessed. The product implements a hardware-based access control check. The asset should be accessible only after the check is successful. If, however, this operation is not atomic and the asset is accessed before the check is complete, the security of the system may be compromised.
Applicable Platforms: Languages: Verilog (Undetermined Prevalence), VHDL (Undetermined Prevalence), Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 Assume that the module foo_bar implements a protected register. The register content is the asset. Only transactions made by user ID 0x4 (indicated by the signal usr_id) are allowed to modify the register contents. The signal grant_access is used to provide access. (bad code) Example Language: Verilog
module foo_bar(data_out, usr_id, data_in, clk, rst_n);
  output reg [7:0] data_out;
  input wire [2:0] usr_id;
  input wire [7:0] data_in;
  input wire clk, rst_n;
  wire grant_access;

  always @ (posedge clk or negedge rst_n)
  begin
    if (!rst_n)
      data_out = 0;
    else
      data_out = (grant_access) ? data_in : data_out;
  end

  assign grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;
endmodule
This code uses Verilog blocking assignments for data_out and grant_access. Therefore, these assignments happen sequentially (i.e., data_out is updated to the new value first, and grant_access is updated the next cycle) and not in parallel. As a result, the asset data_out can be modified even before the access control check is complete and the grant_access signal is set. Since grant_access does not have a reset value, it will be meta-stable and will randomly go to either 0 or 1. Flipping the order of the assignments of data_out and grant_access solves the problem. The corrected code is shown below; grant_access is declared as a reg so that the check can be evaluated inside the always block before the protected register is updated. (good code) Example Language: Verilog
module foo_bar(data_out, usr_id, data_in, clk, rst_n);
  output reg [7:0] data_out;
  input wire [2:0] usr_id;
  input wire [7:0] data_in;
  input wire clk, rst_n;
  reg grant_access;

  always @ (posedge clk or negedge rst_n)
  begin
    if (!rst_n)
      data_out = 0;
    else
      begin
        // Evaluate the access control check first...
        grant_access = (usr_id == 3'h4) ? 1'b1 : 1'b0;
        // ...and only then allow the protected register to be updated.
        data_out = (grant_access) ? data_in : data_out;
      end
  end
endmodule
CWE-1282: Assumed-Immutable Data is Stored in Writable Memory
Immutable data, such as a first-stage bootloader, device identifiers, and "write-once" configuration settings, are stored in writable memory that can be re-programmed or updated in the field. Security services such as secure boot, authentication of code and data, and device attestation all require assets such as the first-stage bootloader, public keys, golden hash digests, etc., which are implicitly trusted. Storing these assets in read-only memory (ROM), fuses, or one-time programmable (OTP) memory provides strong integrity guarantees and provides a root of trust for securing the rest of the system. Security is lost if assets assumed to be immutable can be modified.
Applicable Platforms: Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 Cryptographic hash functions are commonly used to create unique fixed-length digests that ensure the integrity of code and keys. A golden digest is stored on the device and compared to the digest computed from the data to be verified. If the digests match, the data has not been maliciously modified. If an attacker can modify the golden digest, they gain the ability to store arbitrary data that passes the verification check. Hash digests used to verify public keys and early-stage boot code should be immutable, with the strongest protection offered by hardware immutability.
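The following minimal sketch (not part of the CWE entry) illustrates this pattern in C: the golden digest lives in one-time-programmable storage and is compared against a freshly computed digest of the boot image. The names otp_golden_digest() and sha256() are hypothetical placeholders for platform-specific services. (informative) Example Language: C
#include <stdint.h>
#include <string.h>

#define DIGEST_LEN 32  /* e.g., SHA-256 output size */

/* Hypothetical platform hooks -- names are illustrative, not from CWE. */
extern const uint8_t *otp_golden_digest(void);  /* reference digest stored in OTP/ROM */
extern void sha256(const uint8_t *data, size_t len, uint8_t out[DIGEST_LEN]);

/* Returns nonzero only if the image matches the immutable golden digest. */
int verify_boot_image(const uint8_t *image, size_t image_len)
{
    uint8_t computed[DIGEST_LEN];
    sha256(image, image_len, computed);

    /* Because the reference digest comes from one-time-programmable storage,
       an attacker who can rewrite flash still cannot change what is trusted. */
    return memcmp(computed, otp_golden_digest(), DIGEST_LEN) == 0;
}
If the golden digest were instead kept in ordinary writable flash, the same check would pass for any image whose digest the attacker also wrote, which is exactly the weakness this entry describes.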
CWE-319: Cleartext Transmission of Sensitive Information
The product transmits sensitive or security-critical data in cleartext in a communication channel that can be sniffed by unauthorized actors. Many communication channels can be "sniffed" (monitored) by adversaries during data transmission. For example, in networking, packets can traverse many intermediary nodes from the source to the destination, whether across the internet, an internal network, the cloud, etc. Some actors might have privileged access to a network interface or any link along the channel, such as a router, but they might not be authorized to collect the underlying data. As a result, network traffic could be sniffed by adversaries, spilling security-critical data. Applicable communication channels are not limited to software products. Applicable channels include hardware-specific technologies such as internal hardware networks and external debug channels, supporting remote JTAG debugging. When mitigations are not applied to combat adversaries within the product's threat model, this weakness significantly lowers the difficulty of exploitation by such adversaries. When full communications are recorded or logged, such as with a packet dump, an adversary could attempt to obtain the dump long after the transmission has occurred and try to "sniff" the cleartext from the recorded communications in the dump itself.
Applicable Platforms: Languages: Class: Not Language-Specific (Undetermined Prevalence). Technologies: Class: Cloud Computing (Undetermined Prevalence), Class: Mobile (Undetermined Prevalence), Class: ICS/OT (Often Prevalent), Class: System on Chip (Undetermined Prevalence), Test/Debug Hardware (Often Prevalent).
Example 1 The following code attempts to establish a connection to a site to communicate sensitive information. (bad code) Example Language: Java
try {
  URL u = new URL("http://www.secret.example.org/");
  HttpURLConnection hu = (HttpURLConnection) u.openConnection();
  hu.setRequestMethod("PUT");
  hu.connect();
  OutputStream os = hu.getOutputStream();
  hu.disconnect();
}
catch (IOException e) {
  //...
}
Though a connection is successfully made, the connection is unencrypted, and it is possible that all sensitive data sent to or received from the server will be read by unintended actors.
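As a contrast to the request above, the following minimal C sketch (not part of the CWE entry) sends the same PUT over an encrypted channel; it assumes the libcurl library is available and abbreviates error handling. (informative) Example Language: C
#include <curl/curl.h>

/* Minimal sketch: send the PUT request over HTTPS so the payload is
   encrypted in transit. Error handling is abbreviated for clarity. */
int send_secret(const char *payload)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return -1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://www.secret.example.org/");
    curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload);
    /* Keep certificate verification enabled (the libcurl default) so the
       channel cannot be silently downgraded or intercepted. */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : -1;
}
Keeping certificate verification enabled matters as much as switching to HTTPS, since a client that accepts any certificate can still be intercepted.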
Example 2 In 2022, the OT:ICEFALL study examined products by 10 different Operational Technology (OT) vendors. The researchers reported 56 vulnerabilities and said that the products were "insecure by design" [REF-1283]. If exploited, these vulnerabilities often allowed adversaries to change how the products operated, ranging from denial of service to changing the code that the products executed. Since these products were often used in industries such as power, electrical, water, and others, there could even be safety implications. Multiple vendors used cleartext transmission of sensitive information in their OT products.
Example 3 A TAP-accessible register is read/written by a JTAG-based tool, for internal use by authorized users. However, an adversary can connect a probing device and collect the values from the unencrypted channel connecting the JTAG interface to the authorized user, if no additional protections are employed.
Example 4 The following Azure CLI command lists the properties of a particular storage account: (informative) Example Language: Shell
az storage account show -g {ResourceGroupName} -n {StorageAccountName}
The JSON result might be: (bad code) Example Language: JSON
{
  "name": "{StorageAccountName}",
  "enableHttpsTrafficOnly": false,
  "type": "Microsoft.Storage/storageAccounts"
}
The enableHttpsTrafficOnly value is set to false because the default setting for Secure transfer is set to Disabled. This allows cloud storage resources to successfully connect and transfer data without the use of encryption (e.g., HTTP, SMB 2.1, SMB 3.0, etc.). Azure's storage accounts can be configured to only accept requests from secure connections made over HTTPS. The secure transfer setting can be enabled using Azure's Portal (GUI) or programmatically by setting the enableHttpsTrafficOnly property to True on the storage account, such as: (good code) Example Language: Shell
az storage account update -g {ResourceGroupName} -n {StorageAccountName} --https-only true
The change can be confirmed from the result by verifying that the enableHttpsTrafficOnly value is true: (good code) Example Language: JSON
{
  "name": "{StorageAccountName}",
  "enableHttpsTrafficOnly": true,
  "type": "Microsoft.Storage/storageAccounts"
}
Note: to enable secure transfer using Azure's Portal instead of the command line:
Maintenance: The Taxonomy_Mappings to ISA/IEC 62443 were added in CWE 4.10 but are still under review. These draft mappings were performed by members of the "Mapping CWE to 62443" subgroup of the CWE-CAPEC ICS/OT Special Interest Group (SIG), and their work is incomplete as of CWE 4.10. The mappings are included to facilitate discussion and review by the broader ICS/OT community and are likely to change in future CWE versions.
CWE-1255: Comparison Logic is Vulnerable to Power Side-Channel Attacks
A device's real-time power consumption may be monitored during security token evaluation, and the information gleaned may be used to determine the value of the reference token. The power consumed by a device may be instrumented and monitored in real time. If the algorithm for evaluating security tokens is not sufficiently robust, the power consumption may vary depending on how the entered token compares against the reference value. Further, if retries are unlimited, the power difference between a "good" entry and a "bad" entry may be observed and used to determine whether each entry itself is correct, thereby allowing unauthorized parties to calculate the reference value.
Applicable Platforms: Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 Consider an example hardware module that checks a user-provided password (or PIN) to grant access to a user. The user-provided password is compared against a stored value byte-by-byte. (bad code) Example Language: C
static nonvolatile int password_tries = NUM_RETRIES;
while (true) {
    while (password_tries == 0)
        ; // Hang here if no more password tries

    password_ok = 0;
    for (i = 0; i < NUM_PW_DIGITS; i++) {
        if (GetPasswordByte() == stored_password[i])
            password_ok |= 1; // Power consumption is different here
        else
            password_ok |= 0; // than from here
    }

    if (password_ok > 0) {
        password_tries = NUM_RETRIES;
        break; // Password OK, proceed
    }
    password_tries--; // A retry is only counted after a full comparison
}
Since the algorithm uses a different number of 1's and 0's for password validation, a different amount of power is consumed for the good byte versus the bad byte comparison. Using this information, an attacker may be able to guess the correct password for that byte-by-byte iteration with several repeated attempts by stopping the password evaluation before it completes. Among the various options for mitigating the string comparison is obscuring the power consumption by having opposing bit flips during bit operations. Note that in this example, the initial change of the bit values could still provide a power indication depending upon the hardware itself. This possibility needs to be measured for verification. (good code) Example Language: C
static nonvolatile int password_tries = NUM_RETRIES;
while (true) {
    while (password_tries == 0)
        ; // Hang here if no more password tries
    password_tries--; // Put retry code here to catch partial retries

    password_ok = 0;
    for (i = 0; i < NUM_PW_DIGITS; i++) {
        if (GetPasswordByte() == stored_password[i])
            password_ok |= 0x10; // Power consumption here
        else
            password_ok |= 0x01; // is now the same here
    }

    if ((password_ok & 1) == 0) {
        password_tries = NUM_RETRIES;
        break; // Password OK, proceed
    }
}
An alternative to the previous example is simply comparing the whole password simultaneously. (good code) Example Language: C
static nonvolatile int password_tries = NUM_RETRIES;
while (true) {
    while (password_tries == 0)
        ; // Hang here if no more password tries
    password_tries--; // Put retry code here to catch partial retries

    for (i = 0; i < NUM_PW_DIGITS; i++)
        stored_password[i] = GetPasswordByte();

    if (memcmp(stored_password, saved_password, NUM_PW_DIGITS) == 0) {
        password_tries = NUM_RETRIES;
        break; // Password OK, proceed
    }
}
Since the comparison is done atomically, there is no indication which bytes fail, forcing the attacker to brute force the whole password at once. Note that other mitigations may exist, such as masking, i.e., causing a large current draw to mask individual bit flips. Example 2 This code demonstrates the transfer of a secret key using a Serial-In/Serial-Out shift. It is easy to extract the secret using simple power analysis, as each shift gives data on a single bit of the key. (bad code) Example Language: Verilog
module siso(clk, rst, a, q);
    input a;
    input clk, rst;
    output q;
    reg q;
    always @(posedge clk, posedge rst)
    begin
        if (rst == 1'b1)
            q <= 1'b0;
        else
            q <= a;
    end
endmodule
This code demonstrates the transfer of a secret key using a Parallel-In/Parallel-Out shift. In a parallel shift, the data is confounded by multiple bits of the key, not just one. (good code) Example Language: Verilog
module pipo(clk, rst, a, q);
    input clk, rst;
    input [3:0] a;
    output [3:0] q;
    reg [3:0] q;
    always @(posedge clk, posedge rst)
    begin
        if (rst == 1'b1)
            q <= 4'b0000;
        else
            q <= a;
    end
endmodule
CWE CATEGORY: Core and Compute Issues
Weaknesses in this category are typically associated with CPUs, Graphics, Vision, AI, FPGA, and microcontrollers.
Mapping Use for Mapping: Prohibited (this CWE ID must not be used to map to real-world vulnerabilities). Rationale: this entry is a Category. Using categories for mapping has been an actively discouraged practice since at least 2019. Categories are informal organizational groupings of weaknesses that help navigation and browsing by CWE users, but they are not weaknesses in themselves. Comments: See member weaknesses of this category.
CWE-1252: CPU Hardware Not Configured to Support Exclusivity of Write and Execute Operations
The CPU is not configured to provide hardware support for exclusivity of write and execute operations on memory. This allows an attacker to execute data from all of memory. CPUs provide a special bit that supports exclusivity of write and execute operations. This bit is used to segregate areas of memory to either mark them as code (instructions, which can be executed) or data (which should not be executed). In this way, if a user can write to a region of memory, the user cannot execute from that region, and vice versa. This exclusivity, provided by a special hardware bit, is leveraged by the operating system to protect executable space. While this bit is available in most modern processors by default, in some CPUs the exclusivity is implemented via a memory-protection unit (MPU) and memory-management unit (MMU) in which memory regions can be carved out with exact read, write, and execute permissions. However, if the CPU does not have an MMU/MPU, then there is no write exclusivity. Without configuring exclusivity of operations via segregated areas of memory, an attacker may be able to inject malicious code into memory and later execute it.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Microcontroller Hardware (Undetermined Prevalence), Processor Hardware (Undetermined Prevalence).
Example 1 The MCS51 Microcontroller (based on the 8051) does not have a special bit to support write exclusivity. It also does not have MMU/MPU support. The Cortex-M CPU has an optional MPU that supports up to 8 regions. (bad code) Example Language: Other The optional MPU is not configured.
If the MPU is not configured, then an attacker will be able to inject malicious data into memory and execute it.
CWE CATEGORY: Cross-Cutting Problems
Weaknesses in this category can arise in multiple areas of hardware design or can apply to a wide cross-section of components.
Mapping Use for Mapping: Prohibited (this CWE ID must not be used to map to real-world vulnerabilities). Rationale: this entry is a Category. Using categories for mapping has been an actively discouraged practice since at least 2019. Categories are informal organizational groupings of weaknesses that help navigation and browsing by CWE users, but they are not weaknesses in themselves. Comments: See member weaknesses of this category.
CWE-1279: Cryptographic Operations are run Before Supporting Units are Ready
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Performing cryptographic operations without ensuring that the supporting inputs are ready to supply valid data may compromise the cryptographic result. Many cryptographic hardware units depend upon other hardware units to supply information to them to produce a securely encrypted result. For example, a cryptographic unit that depends on an external random-number-generator (RNG) unit for entropy must wait until the RNG unit is producing random numbers. If a cryptographic unit retrieves a private encryption key from a fuse unit, the fuse unit must be up and running before a key may be supplied.
Languages: Verilog (Undetermined Prevalence), VHDL (Undetermined Prevalence), Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Processor Hardware (Undetermined Prevalence), Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 The following pseudocode illustrates the weak encryption resulting from the use of a pseudo-random-number generator output. (bad code) Example Language: Pseudocode
If random_number_generator_self_test_passed() == TRUE
then Seed = get_random_number_from_RNG()
else Seed = hardcoded_number
In the example above, a check of RNG readiness is performed first. If the check fails, the RNG is ignored and a hardcoded value is used instead. The hardcoded value severely weakens the encrypted output. (good code) Example Language: Pseudocode
If random_number_generator_self_test_passed() == TRUE
then Seed = get_random_number_from_RNG()
else enter_error_state()
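The same principle can be expressed at the register-transfer level by gating the start of the cryptographic engine on a ready indication from the entropy source. The following sketch is illustrative only; the rng_ready, start_request, and crypto_start signals are assumed names rather than part of the original example. (example sketch) Example Language: Verilog
// Illustrative sketch: the crypto engine only receives a start pulse once the
// RNG reports that its self-test has passed and entropy is available.
module crypto_start_gate (
    input  wire clk,
    input  wire resetn,
    input  wire rng_ready,      // asserted by the RNG after its self-test passes
    input  wire start_request,  // firmware/software request to begin a crypto operation
    output reg  crypto_start    // start pulse actually seen by the crypto engine
);
    always @(posedge clk or negedge resetn) begin
        if (~resetn)
            crypto_start <= 1'b0;
        else
            // A start request issued before the RNG is ready is ignored rather
            // than silently falling back to a weak or hardcoded seed.
            crypto_start <= start_request & rng_ready;
    end
endmodule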
CWE CATEGORY: Debug and Test Problems
Weaknesses in this category are related to hardware debug and test interfaces such as JTAG and scan chain.
Mapping Use for Mapping: Prohibited (this CWE ID must not be used to map to real-world vulnerabilities). Rationale: this entry is a Category. Using categories for mapping has been an actively discouraged practice since at least 2019. Categories are informal organizational groupings of weaknesses that help navigation and browsing by CWE users, but they are not weaknesses in themselves. Comments: See member weaknesses of this category.
CWE-1295: Debug Messages Revealing Unnecessary Information
The product fails to adequately prevent the revealing of unnecessary and potentially sensitive system information within debugging messages. Debug messages are messages that help troubleshoot an issue by revealing the internal state of the system. For example, debug data in a design can be exposed through internal memory array dumps or boot logs through interfaces like UART via TAP commands, scan chain, etc. Thus, the more information contained in a debug message, the easier it is to debug. However, there is also the risk of revealing information that could help an attacker either identify a vulnerability or gain a better understanding of the system. Thus, this extra information could lower the "security by obscurity" factor. While "security by obscurity" alone is insufficient, it can help as a part of "defense in depth".
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 This example shows how an attacker can take advantage of unnecessary information in debug messages. Example 1: Suppose that, in response to a Test Access Port (TAP) chaining request, the debug message also reveals the current TAP hierarchy (the full topology) in addition to the success/failure message. Example 2: In response to a password-filling request, the debug message, instead of a simple Granted/Denied response, prints an elaborate message: "The user-entered password does not match the actual password stored in <directory name>." The result of the above examples is that the user is able to gather additional unauthorized information about the system from the debug messages. The solution is to ensure that debug messages do not reveal additional details.
CWE-1273: Device Unlock Credential Sharing
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The credentials necessary for unlocking a device are shared across multiple parties and may expose sensitive information. "Unlocking a device" often means activating certain unadvertised debug and manufacturer-specific capabilities of a device using sensitive credentials. Unlocking a device might be necessary for the purpose of troubleshooting device problems. For example, suppose a device contains the ability to dump the content of the full system memory by disabling the memory-protection mechanisms. Since this is a highly security-sensitive capability, this capability is "locked" in the production part. Unless the device gets unlocked by supplying the proper credentials, the debug capabilities are not available. For cases where the chip designer, chip manufacturer (fabricator), and manufacturing and assembly testers are all employed by the same company, the risk of compromise of the credentials is greatly reduced. However, the risk is greater when the chip designer is employed by one company, the chip manufacturer is employed by another company (a foundry), and the assemblers and testers are employed by yet a third company. Since these different companies will need to perform various tests on the device to verify correct device function, they all need to share the unlock key. Unfortunately, the level of secrecy and policy might be quite different at each company, greatly increasing the risk of sensitive credentials being compromised.
Languages: VHDL (Undetermined Prevalence), Verilog (Undetermined Prevalence), Class: Compiled (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Other (Undetermined Prevalence), Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 This example shows how an attacker can take advantage of compromised credentials. (bad code) Suppose a semiconductor chipmaker, "C", uses the foundry "F" for fabricating its chips. Now, F has many other customers in addition to C, and some of the other customers are much smaller companies. F has dedicated teams for each of its customers, but somehow it mixes up the unlock credentials and sends the unlock credentials of C to the wrong team. This other team does not take adequate precautions to protect the credentials that have nothing to do with them, and eventually the unlock credentials of C get leaked. When the credentials of multiple organizations are stored together, exposure to third parties occurs frequently. (good code) Vertical integration of a production company is one effective method of protecting sensitive credentials. Where vertical integration is not possible, strict access control and need-to-know are methods which can be implemented to reduce these risks.
Maintenance This entry is still under development and will continue to see updates and content improvements.
CWE-1190: DMA Device Enabled Too Early in Boot Phase
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product enables a Direct Memory Access (DMA) capable device before the security configuration settings are established, which allows an attacker to extract data from or gain privileges on the product. DMA is included in a number of devices because it allows data transfer between the computer and the connected device, using direct hardware access to read or write directly to main memory without any OS interaction. An attacker could exploit this to access secrets. Several virtualization-based mitigations have been introduced to thwart DMA attacks. These are usually configured/set up during boot time. However, certain IPs that are powered up before boot is complete (known as early boot IPs) may be DMA capable. Such IPs, if not trusted, could launch DMA attacks and gain access to assets that should otherwise be protected.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Technologies: Class: System on Chip (Undetermined Prevalence).
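One way a design can avoid this weakness, sketched below under the assumption of a hypothetical security_config_done signal that is asserted only after boot firmware has programmed the relevant access controls, is to gate the DMA engine's enable on that indication. (example sketch) Example Language: Verilog
// Illustrative sketch: the DMA enable is qualified by completion of the
// security configuration, so an early-boot IP cannot issue DMA transactions
// before the protections are in place.
module dma_enable_gate (
    input  wire clk,
    input  wire resetn,
    input  wire dma_enable_request,   // request from the DMA-capable device
    input  wire security_config_done, // set once boot-time protections are programmed
    output reg  dma_enable            // enable actually seen by the DMA engine
);
    always @(posedge clk or negedge resetn) begin
        if (~resetn)
            dma_enable <= 1'b0;
        else
            dma_enable <= dma_enable_request & security_config_done;
    end
endmodule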
CWE-440: Expected Behavior Violation
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Languages: Class: Not Language-Specific (Undetermined Prevalence). Technologies: Class: ICS/OT (Undetermined Prevalence).
Theoretical The behavior of an application that is not consistent with the expectations of the developer may lead to incorrect use of the software.
CWE-1258: Exposure of Sensitive System Information Due to Uncleared Debug Information
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The hardware does not fully clear security-sensitive values, such as keys and intermediate values in cryptographic operations, when debug mode is entered. Security-sensitive values, such as keys and intermediate steps of cryptographic operations, are stored in temporary registers in the hardware. If these values are not cleared when debug mode is entered, they may be accessed by a debugger, allowing sensitive information to be accessible by untrusted parties.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 A cryptographic core in a System-On-a-Chip (SoC) is used for cryptographic acceleration and implements several cryptographic operations (e.g., computation of AES encryption and decryption, SHA-256, HMAC, etc.). The keys for these operations or the intermediate values are stored in registers internal to the cryptographic core. These internal registers are in the Memory Mapped Input Output (MMIO) space and are blocked from access by software and other untrusted agents on the SoC. These registers are accessible through the debug and test interface. (bad code) Example Language: Other In the above scenario, registers that store keys and intermediate values of cryptographic operations are not cleared when the system enters debug mode. An untrusted actor running a debugger may read the contents of these registers and gain access to secret keys and other sensitive cryptographic information. (good code) Example Language: Other Whenever the chip enters debug mode, all registers containing security-sensitive data are cleared, rendering them unreadable.
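A minimal RTL sketch of the mitigation, assuming a hypothetical debug_mode signal that is asserted while the debug/test interface is active, is shown below; the register and port names are illustrative, not taken from any particular design. (example sketch) Example Language: Verilog
// Illustrative sketch: the internal key register is scrubbed as soon as debug
// mode is entered, so a debugger cannot read stale key material.
module key_register (
    input  wire         clk,
    input  wire         resetn,
    input  wire         debug_mode, // asserted while the debug/test interface is active
    input  wire         key_load,
    input  wire [127:0] key_in,
    output reg  [127:0] key_reg
);
    always @(posedge clk or negedge resetn) begin
        if (~resetn)
            key_reg <= 128'h0;
        else if (debug_mode)
            key_reg <= 128'h0;   // clear security-sensitive contents on debug entry
        else if (key_load)
            key_reg <= key_in;
    end
endmodule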
CWE-1316: Fabric-Address Map Allows Programming of Unwarranted Overlaps of Protected and Unprotected Ranges
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The address map of the on-chip fabric has protected and unprotected regions overlapping, allowing an attacker to bypass access control to the overlapping portion of the protected region. Various ranges can be defined in the system-address map, either in the memory or in Memory-Mapped-IO (MMIO) space. These ranges are usually defined using special range registers that contain information such as base address and size. Address decoding is the process of determining for which range the incoming transaction is destined. To ensure isolation, ranges containing secret data are access-control protected. Occasionally, these ranges could overlap. The overlap could either be intentional (e.g., due to a limited number of range registers or a limited choice of range sizes) or unintentional (e.g., introduced by errors). Some hardware designs allow dynamic remapping of address ranges assigned to peripheral MMIO ranges. In such designs, intentional address overlaps can be created through misconfiguration by malicious software. When protected and unprotected ranges overlap, an attacker could send a transaction and potentially compromise the protections in place, violating the principle of least privilege.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Bus/Interface Hardware (Undetermined Prevalence), Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 An on-chip fabric supports a 64KB address space that is memory-mapped. The fabric has two range registers that support creation of two protected ranges with specific size constraints: 4KB, 8KB, 16KB, or 32KB. Assets that belong to user A require 4KB, and those of user B require 20KB. Registers and other assets that are not security-sensitive require 40KB. One range register is configured to program 4KB to protect user A's assets. Since a 20KB range cannot be created with the given size constraints, the range register for user B's assets is configured as 32KB. The rest of the address space is left open. As a result, some part of the untrusted, open address space overlaps with the user B range. The fabric does not support least privilege, and an attacker can send a transaction to the overlapping region to tamper with user B data. Since range B only requires 20KB but is allotted 32KB, there is 12KB of reserved space. Overlapping this reserved region of the user B range, where there are no assets, with the untrusted space will prevent an attacker from tampering with user B data.
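One way such overlaps can be caught is to check, whenever a range register is programmed, whether the requested range intersects an existing protected range and to reject the write if it does. The sketch below is illustrative only; the register names and the single-range comparison are assumptions. (example sketch) Example Language: Verilog
// Illustrative sketch: combinational check that a requested range
// [req_base, req_base+req_size) does not overlap the protected range
// [prot_base, prot_base+prot_size).
module range_overlap_check (
    input  wire [15:0] prot_base,
    input  wire [15:0] prot_size,
    input  wire [15:0] req_base,
    input  wire [15:0] req_size,
    output wire        overlap
);
    // Widen the sums by one bit so the end addresses cannot wrap around.
    wire [16:0] prot_end = {1'b0, prot_base} + {1'b0, prot_size};
    wire [16:0] req_end  = {1'b0, req_base}  + {1'b0, req_size};
    // Two ranges overlap unless one of them ends before the other begins.
    assign overlap = ({1'b0, req_base} < prot_end) && ({1'b0, prot_base} < req_end);
endmodule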
CWE-1209: Failure to Disable Reserved Bits
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The reserved bits in a hardware design are not disabled prior to production. Typically, reserved bits are used for future capabilities and should not support any functional logic in the design. However, designers might covertly use these bits to debug or further develop new capabilities in production hardware. Adversaries with access to these bits will write to them in hopes of compromising hardware state. Reserved bits are labeled as such so they can be allocated for a later purpose. They are not supposed to support any functionality in the current design. However, designers might want to use these bits to debug or control/configure a future capability to help minimize time to market (TTM). If the logic being controlled by these bits is still enabled in production, an adversary could use the logic to induce unwanted/unsupported behavior in the hardware.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: System on Chip (Undetermined Prevalence).
Example 1 Assume a hardware Intellectual Property (IP) has address space 0x0-0x0F for its configuration registers, with the last one labeled reserved (i.e. 0x0F). Therefore inside the Finite State Machine (FSM), the code is as follows: (bad code) Example Language: Verilog
reg gpio_out = 0; // gpio should remain low for normal operation
case (register_address)
    4'b1111 : //0x0F
    begin
        gpio_out = 1;
    end
endcase
An adversary may perform writes to reserved address space in hopes of changing the behavior of the hardware. In the code above, the GPIO pin should remain low for normal operation. However, it can be asserted by accessing the reserved address space (0x0F). This may be a concern if the GPIO state is being used as an indicator of health (e.g. if asserted the hardware may respond by shutting down or resetting the system, which may not be the correct action the system should perform). In the code below, the condition "register_address = 0X0F" is commented out, and a default is provided that will catch any values of register_address not explicitly accounted for and take no action with regards to gpio_out. This means that an attacker who is able to write 0X0F to register_address will not enable any undocumented "features" in the process. (good code) Example Language: Verilog
reg gpio_out = 0; // gpio should remain low for normal operation
case (register_address)
    //4'b1111 : //0x0F
    default: gpio_out = gpio_out;
endcase
CWE-1277: Firmware Not Updateable
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product does not provide its users with the ability to update or patch its firmware to address any vulnerabilities or weaknesses that may be present. Without the ability to patch or update firmware, consumers will be left vulnerable to exploitation of any known vulnerabilities, or any vulnerabilities that are discovered in the future. This can expose consumers to permanent risk throughout the entire lifetime of the device, which could be years or decades. Some external protective measures and mitigations might be employed to aid in preventing or reducing the risk of malicious attack, but the root weakness cannot be corrected.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 A refrigerator has an Internet interface for the official purpose of alerting the manufacturer when the refrigerator detects a fault. Because the device is attached to the Internet, the refrigerator is a target for hackers who may wish to use the device for other, potentially more nefarious, purposes. (bad code) Example Language: Other The refrigerator has no means of patching; once hacked, it becomes a source of email spam. (good code) Example Language: Other The device automatically patches itself, providing considerably more protection against being hacked.
Terminology The "firmware" term does not have a single commonly-shared definition, so there may be variations in how this CWE entry is interpreted during mapping.
CWE CATEGORY: General Circuit and Logic Design Concerns
Weaknesses in this category are related to hardware-circuit design and logic (e.g., CMOS transistors, finite state machines, and registers) as well as issues related to hardware description languages such as System Verilog and VHDL.
Mapping Use for Mapping: Prohibited (this CWE ID must not be used to map to real-world vulnerabilities). Rationale: this entry is a Category. Using categories for mapping has been an actively discouraged practice since at least 2019. Categories are informal organizational groupings of weaknesses that help navigation and browsing by CWE users, but they are not weaknesses in themselves. Comments: See member weaknesses of this category.
CWE-1270: Generation of Incorrect Security Tokens
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product implements a Security Token mechanism to differentiate what actions are allowed or disallowed when a transaction originates from an entity. However, the Security Tokens generated in the system are incorrect. Systems-On-a-Chip (SoC) (integrated circuits and hardware engines) implement Security Tokens to differentiate and identify actions originated from various agents. These actions could be "read", "write", "program", "reset", "fetch", "compute", etc. Security Tokens are generated and assigned to every agent on the SoC that is either capable of generating an action or receiving an action from another agent. Every agent could be assigned a unique Security Token based on its trust level or privileges. Incorrectly generated Security Tokens could result in the same token being used for multiple agents or multiple tokens being used for the same agent. This condition could result in a Denial-of-Service (DoS) or the execution of an action that in turn could result in privilege escalation or unintended access.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 Consider a system with a register for storing an AES key for encryption or decryption. The key is 128 bits long and implemented as a set of four 32-bit registers. The key registers are assets, and a register, AES_KEY_ACCESS_POLICY, is defined to provide the necessary access controls. The access-policy register defines which agents, identified by a Security Token, may access the AES-key registers. Each bit in this 32-bit register is used to define a Security Token, so there can be a maximum of 32 Security Tokens that are allowed access to the AES-key registers. When a bit is set ("1"), the action is allowed for the agent whose identity matches that bit number. If a bit is clear ("0"), the action is disallowed for the corresponding agent. Assume the system has two agents: a Main-controller and an Aux-controller. The respective Security Tokens are "1" and "2".
An agent with Security Token "1" has access to the AES_ENC_DEC_KEY_0 through AES_ENC_DEC_KEY_3 registers. As per the above access policy, the AES-key-access policy allows access to the AES-key registers if the Security Token is "1". (bad code) Example Language: Other The SoC incorrectly generates Security Token "1" for every agent. In other words, both the Main-controller and the Aux-controller are assigned Security Token "1". Both agents have access to the AES-key registers. (good code) Example Language: Other The SoC should correctly generate Security Tokens, assigning "1" to the Main-controller and "2" to the Aux-controller.
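As a sketch of how such a policy register is typically consulted (the signal names below are assumptions, not part of the original example), the requesting agent's Security Token selects a bit of AES_KEY_ACCESS_POLICY, and that bit gates access to the key registers. Note that this check is only as strong as the token generation itself: if the SoC assigns the same token to both controllers, the policy bit cannot distinguish them. (example sketch) Example Language: Verilog
// Illustrative sketch: the incoming transaction carries the agent's Security
// Token; the policy bit selected by that token decides whether the access to
// the AES key registers is allowed.
module key_access_check (
    input  wire [31:0] AES_KEY_ACCESS_POLICY, // one bit per Security Token
    input  wire [4:0]  req_token,             // Security Token of the requesting agent
    input  wire        req_valid,
    output wire        access_allowed
);
    assign access_allowed = req_valid & AES_KEY_ACCESS_POLICY[req_token];
endmodule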
CWE-1313: Hardware Allows Activation of Test or Debug Logic at Runtime
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
During runtime, the hardware allows test or debug logic (features) to be activated, which allows for changing the state of the hardware. This feature can alter the intended behavior of the system and allow for alteration and leakage of sensitive data by an adversary. An adversary can take advantage of test or debug logic that is made accessible through the hardware during normal operation to modify the intended behavior of the system. For example, an accessible test/debug mode may allow read/write access to any system data. Using error injection (a common test/debug feature) during a transmit/receive operation on a bus, data may be modified to produce an unintended message. Similarly, confidentiality could be compromised by such features allowing access to secrets.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
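As a hypothetical illustration (the lifecycle and enable signals are assumed names), test or error-injection logic can be qualified by an authorized test lifecycle state so that attempts to activate it during normal runtime have no effect. (example sketch) Example Language: Verilog
// Illustrative sketch: the error-injection enable used for test is only
// honored while the part is in an authorized test lifecycle state; outside
// that state the enable is forced off.
module test_logic_gate (
    input  wire clk,
    input  wire resetn,
    input  wire test_lifecycle,   // asserted only in an authorized test/debug lifecycle state
    input  wire err_inject_write, // software attempt to enable error injection
    output reg  err_inject_enable
);
    always @(posedge clk or negedge resetn) begin
        if (~resetn)
            err_inject_enable <= 1'b0;
        else if (!test_lifecycle)
            err_inject_enable <= 1'b0; // force off during normal operation
        else if (err_inject_write)
            err_inject_enable <= 1'b1;
    end
endmodule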
CWE-1276: Hardware Child Block Incorrectly Connected to Parent System
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Signals between a hardware IP and the parent system design are incorrectly connected, causing security risks. Individual hardware IP must communicate with the parent system in order for the product to function correctly and as intended. If this communication is implemented incorrectly, it may not cause any apparent functional issues, but it may still cause security issues. For example, if the IP should only be reset by a system-wide hard reset, but instead the reset input is connected to a software-triggered debug mode reset (which is also asserted during a hard reset), the integrity of data inside the IP can be violated.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 Many SoCs use hardware to partition system resources between trusted and un-trusted entities. One example of this concept is the Arm TrustZone, in which the processor and all security-aware IP attempt to isolate resources based on the status of a privilege bit. This privilege bit is part of the input interface in all TrustZone-aware IP. If this privilege bit is accidentally grounded or left unconnected when the IP is instantiated, privilege escalation of all input data may occur. (bad code) Example Language: Verilog
// IP definition
module tz_peripheral(clk, reset, data_in, data_in_security_level, ...);
    input clk, reset;
    input [31:0] data_in;
    input data_in_security_level;
    ...
endmodule

// Instantiation of IP in a parent system
module soc(...)
    ...
    tz_peripheral u_tz_peripheral(
        .clk(clk),
        .rst(rst),
        .data_in(rdata),
        // Copy-and-paste error or typo grounds data_in_security_level
        // (in this example 0=secure, 1=non-secure), effectively promoting all data to "secure"
        .data_in_security_level(1'b0),
        ...
    );
endmodule
In the Verilog code below, the security level input to the TrustZone aware peripheral is correctly driven by an appropriate signal instead of being grounded. (good code) Example Language: Verilog
// Instantiation of IP in a parent system
module soc(...)
    ...
    tz_peripheral u_tz_peripheral(
        .clk(clk),
        .rst(rst),
        .data_in(rdata),
        // This port is no longer grounded, but instead driven by the appropriate signal
        .data_in_security_level(rdata_security_level),
        ...
    );
endmodule
CWE-1234: Hardware Internal or Debug Modes Allow Override of Locks
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Device configuration controls are commonly programmed after a device power reset by a trusted firmware or software module (e.g., BIOS/bootloader) and then locked from any further modification. This is commonly implemented using a trusted lock bit, which, when set, disables writes to a protected set of registers or address regions. The lock protection is intended to prevent modification of certain system configuration (e.g., memory/memory protection unit configuration). If the hardware design supports debug features or internal modes/system states that can override the lock protection, the protected configuration can be accessed and modified.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1
Consider the Locked_register_example module below. This register module supports a lock mode that blocks any writes after the lock is set to 1.
(bad code) Example Language: Verilog
module Locked_register_example (
    input [15:0] Data_in, input Clk, input resetn,
    input write, input Lock, input scan_mode, input debug_unlocked,
    output reg [15:0] Data_out
);
reg lock_status;

always @(posedge Clk or negedge resetn)
    if (~resetn) // Register is reset by resetn
    begin
        lock_status <= 1'b0;
    end
    else if (Lock)
    begin
        lock_status <= 1'b1;
    end
    else if (~Lock)
    begin
        lock_status <= lock_status;
    end

always @(posedge Clk or negedge resetn)
    if (~resetn) // Register is reset by resetn
    begin
        Data_out <= 16'h0000;
    end
    else if (write & (~lock_status | scan_mode | debug_unlocked)) // Register protected by Lock bit input; overrides supported for scan_mode & debug_unlocked
    begin
        Data_out <= Data_in;
    end
    else if (~write)
    begin
        Data_out <= Data_out;
    end
endmodule
If either the scan_mode or the debug_unlocked modes can be triggered by software, then the lock protection may be bypassed. (good code)
Either remove the debug and scan mode overrides or protect enabling of these modes so that only trusted and authorized users may enable these modes.
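A minimal sketch of the first option, applied to the write-enable condition of the Locked_register_example module above (the rest of the module is assumed unchanged), simply drops the scan and debug overrides so that only the lock bit governs writes. (example sketch) Example Language: Verilog
// Illustrative sketch: the data register is writable only while the lock is
// clear; scan_mode and debug_unlocked no longer override the protection.
always @(posedge Clk or negedge resetn)
    if (~resetn)
    begin
        Data_out <= 16'h0000;
    end
    else if (write & ~lock_status) // the Lock bit alone protects the register
    begin
        Data_out <= Data_in;
    end
    else if (~write)
    begin
        Data_out <= Data_out;
    end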
CWE-1298: Hardware Logic Contains Race Conditions
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
A race condition in the hardware logic results in undermining security guarantees of the system. A race condition in logic circuits typically occurs when a logic gate gets inputs from signals that have traversed different paths while originating from the same source. Such inputs to the gate can change at slightly different times in response to a change in the source signal. This results in a timing error or a glitch (temporary or permanent) that causes the output to change to an unwanted state before settling back to the desired state. If such timing errors occur in access control logic or finite state machines that are implemented in security-sensitive flows, an attacker might exploit them to circumvent existing protections.
Languages: Verilog (Undetermined Prevalence), VHDL (Undetermined Prevalence). Technologies: Class: System on Chip (Undetermined Prevalence).
Example 1 The code below shows a 2x1 multiplexor using logic gates. Though the code shown below results in the minimum gate solution, it is disjoint and causes glitches. (bad code) Example Language: Verilog
// 2x1 Multiplexor using logic-gates
module glitchEx(
    input wire in0, in1, sel,
    output wire z
);
    wire not_sel;
    wire and_out1, and_out2;
    assign not_sel = ~sel;
    assign and_out1 = not_sel & in0;
    assign and_out2 = sel & in1;
    // Buggy line of code:
    assign z = and_out1 | and_out2; // glitch in signal z
endmodule
The buggy line of code, commented above, results in signal 'z' periodically changing to an unwanted state. Thus, any logic that references signal 'z' may access it at a time when it is in this unwanted state. This line should be replaced with the line shown below in the Good Code Snippet, which results in signal 'z' remaining in a continuous, known state. A reference for the above code, along with waveforms for simulation, can be found in the references below. (good code) Example Language: Verilog
assign z = and_out1 | and_out2 | (in0 & in1);
This line of code removes the glitch in signal z.
CWE-1264: Hardware Logic with Insecure De-Synchronization between Control and Data Channels
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The hardware logic for error handling and security checks can incorrectly forward data before the security check is complete. Many high-performance on-chip bus protocols and processor data-paths employ separate channels for control and data to increase parallelism and maximize throughput. Bugs in the hardware logic that handle errors and security checks can make it possible for data to be forwarded before the completion of the security checks. If the data can propagate to a location in the hardware observable to an attacker, loss of data confidentiality can occur. 'Meltdown' is a concrete example of how de-synchronization between data and permissions-checking logic can violate confidentiality requirements. Data loaded from a page marked as privileged was returned to the CPU regardless of the current privilege level, for performance reasons. The assumption was that the CPU could later remove all traces of this data during the handling of the illegal memory access exception, but this assumption was proven false as traces of the secret data were not removed from the microarchitectural state.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Class: Not Technology-Specific (Undetermined Prevalence).
Example 1 There are several standard on-chip bus protocols used in modern SoCs to allow communication between components. There are a wide variety of commercially available hardware IP implementing the interconnect logic for these protocols. A bus connects components which initiate/request communications such as processors and DMA controllers (bus masters) with peripherals which respond to requests. In a typical system, the privilege level or security designation of the bus master along with the intended functionality of each peripheral determine the security policy specifying which specific bus masters can access specific peripherals. This security policy (commonly referred to as a bus firewall) can be enforced using separate IP/logic from the actual interconnect responsible for the data routing. (bad code) Example Language: Other The firewall and data routing logic becomes de-synchronized due to a hardware logic bug allowing components that should not be allowed to communicate to share data. For example, consider an SoC with two processors. One is being used as a root of trust and can access a cryptographic key storage peripheral. The other processor (application cpu) may run potentially untrusted code and should not access the key store. If the application cpu can issue a read request to the key store which is not blocked due to de-synchronization of data routing and the bus firewall, disclosure of cryptographic keys is possible. (good code) Example Language: Other All data is correctly buffered inside the interconnect until the firewall has determined that the endpoint is allowed to receive the data.
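A much-simplified sketch of the 'good' behavior described above is shown below; the port names and the assumption of a single outstanding, single-beat request are illustrative only. The interconnect holds the data until the firewall decision is available and forwards it only on a positive decision. (example sketch) Example Language: Verilog
// Illustrative sketch: a data beat from a bus master is buffered and only
// forwarded to the peripheral after the firewall check completes and allows
// the access; otherwise the beat is dropped.
module fw_synchronized_forward (
    input  wire        clk,
    input  wire        resetn,
    input  wire        req_valid,
    input  wire [31:0] req_data,
    input  wire        fw_check_done, // firewall decision is available
    input  wire        fw_allow,      // firewall decision: access permitted
    output reg         fwd_valid,
    output reg  [31:0] fwd_data
);
    reg        pend_valid;
    reg [31:0] pend_data;

    always @(posedge clk or negedge resetn) begin
        if (~resetn) begin
            pend_valid <= 1'b0;
            pend_data  <= 32'h0;
            fwd_valid  <= 1'b0;
            fwd_data   <= 32'h0;
        end else begin
            fwd_valid <= 1'b0;
            if (req_valid) begin
                pend_valid <= 1'b1;    // hold the beat until the check completes
                pend_data  <= req_data;
            end else if (pend_valid && fw_check_done) begin
                pend_valid <= 1'b0;
                if (fw_allow) begin
                    fwd_valid <= 1'b1; // forward only after a positive decision
                    fwd_data  <= pend_data;
                end
            end
        end
    end
endmodule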
Maintenance As of CWE 4.9, members of the CWE Hardware SIG are closely analyzing this entry and others to improve CWE's coverage of transient execution weaknesses, which include issues related to Spectre, Meltdown, and other attacks. Additional investigation may include other weaknesses related to microarchitectural state. As a result, this entry might change significantly in CWE 4.10.
CWE-1257: Improper Access Control Applied to Mirrored or Aliased Memory Regions
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
Aliased or mirrored memory regions in hardware designs may have inconsistent read/write permissions enforced by the hardware. A possible result is that an untrusted agent is blocked from accessing a memory region but is not blocked from accessing the corresponding aliased memory region. Hardware product designs often need to implement memory protection features that enable privileged software to define isolated memory regions and access control (read/write) policies. Isolated memory regions can be defined on different memory spaces in a design (e.g., system physical address, virtual address, memory-mapped IO). Each memory cell should be mapped and assigned a system address that the core software can use to read/write to that memory. It is possible to map the same memory cell to multiple system addresses such that a read/write to any of the aliased system addresses would be decoded to the same memory cell. This is commonly done in hardware designs for redundancy and for simplifying address decoding logic. If one of the memory regions is corrupted or faulty, then the hardware can switch to using the data in the mirrored memory region. Memory aliases can also be created in the system address map if the address decoder unit ignores higher order address bits when mapping a smaller address region into the full system address. A common security weakness that can exist in such memory mapping is that aliased memory regions could have different read/write access protections enforced by the hardware, such that an untrusted agent is blocked from accessing a memory address but is not blocked from accessing the corresponding aliased memory address. Such inconsistency can then be used to bypass the access protection of the primary memory block and read or modify the protected memory. An untrusted agent could also possibly create memory aliases in the system address map for malicious purposes if it is able to change the mapping of an address region or modify memory region sizes.
Languages: Class: Not Language-Specific (Undetermined Prevalence). Operating Systems: Class: Not OS-Specific (Undetermined Prevalence). Architectures: Class: Not Architecture-Specific (Undetermined Prevalence). Technologies: Memory Hardware (Undetermined Prevalence), Processor Hardware (Undetermined Prevalence), Microcontroller Hardware (Undetermined Prevalence), Network on Chip Hardware (Undetermined Prevalence), Class: System on Chip (Undetermined Prevalence).
Example 1
In a System-on-a-Chip (SoC) design, the system fabric uses 16-bit addresses. An IP unit (Unit_A) has 4 kilobytes of internal memory, which is mapped into a 16-kilobyte address range in the system fabric address map.
To protect the register controls in Unit_A, unprivileged software is blocked from accessing addresses between 0x0000 - 0x0FFF. The address decoder of Unit_A masks off the higher order address bits and decodes only the lower 12 bits for computing the offset into the 4-kilobyte internal memory space. (bad code) Example Language: Other In this design the aliased memory address ranges are: 0x0000 - 0x0FFF, 0x1000 - 0x1FFF, 0x2000 - 0x2FFF, 0x3000 - 0x3FFF. The same register can be accessed using four different addresses: 0x0000, 0x1000, 0x2000, 0x3000. The system address filter only blocks access to the range 0x0000 - 0x0FFF and does not block access to the aliased addresses in the 0x1000 - 0x3FFF range. Thus, untrusted software can leverage the aliased memory addresses to bypass the memory protection. (good code) Example Language: Other In this design the aliased memory addresses (0x1000 - 0x3FFF) could be blocked from all system software access since they are not used by software. Alternately, the MPU logic can be changed to apply the memory protection policies to the full address range mapped to Unit_A (0x0000 - 0x3FFF).
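The aliasing in this example comes from the local address decoder dropping the upper address bits. The fragment below is a sketch only; fabric_addr and privileged are assumed signal names used to contrast the narrow decode with protection that covers the whole window. (example sketch) Example Language: Verilog
// Illustrative sketch of Unit_A's local address handling.
// Aliasing: only the low 12 bits are decoded, so 0x0000, 0x1000, 0x2000, and
// 0x3000 all select the same internal location.
wire [11:0] local_offset = fabric_addr[11:0];
// Mitigation direction: recognize the entire 16 KB window assigned to Unit_A
// (0x0000 - 0x3FFF) and apply the access policy to every alias within it.
wire in_unit_a = (fabric_addr[15:14] == 2'b00);
wire blocked   = in_unit_a & ~privileged; // policy covers every alias, not just 0x0000 - 0x0FFF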
CWE-1262: Improper Access Control for Register Interface
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product uses memory-mapped I/O registers that act as an interface to hardware functionality from software, but there is improper access control to those registers.
Software commonly accesses peripherals in a System-on-Chip (SoC) or other device through a memory-mapped register interface. Malicious software could tamper with any security-critical hardware data that is accessible directly or indirectly through the register interface, which could lead to a loss of confidentiality and integrity.
Languages: Class: Not Language-Specific (Undetermined Prevalence)
Operating Systems: Class: Not OS-Specific (Undetermined Prevalence)
Architectures: Class: Not Architecture-Specific (Undetermined Prevalence)
Technologies: Class: Not Technology-Specific (Undetermined Prevalence)
Example 1
The register interface provides software access to hardware functionality, and this functionality is an attack surface: untrusted code running on the system may use the register interface to influence security-critical hardware behavior. As an example, cryptographic accelerators require a mechanism for software to select modes of operation and to provide plaintext or ciphertext data to be encrypted or decrypted, as well as other functions. This functionality is commonly provided through registers.
(bad code)
Cryptographic key material stored in registers inside the cryptographic accelerator can be accessed by software.
(good code)
Key material stored in registers should never be accessible to software. Even if software can provide a key, all read-back paths to software should be disabled.
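The flawed and corrected patterns can be sketched as a hypothetical memory-mapped register block in C; the base address, register layout, and function names below are assumptions made for illustration and are not taken from any particular accelerator.
(informative) Example Language: C
#include <stdint.h>

#define CRYPTO_BASE 0x40010000u   /* hypothetical base address */

typedef volatile struct {
    uint32_t ctrl;      /* mode select, start/stop         */
    uint32_t status;    /* busy/done flags                 */
    uint32_t key[8];    /* 256-bit key material            */
    uint32_t data_in;   /* plaintext/ciphertext input FIFO */
    uint32_t data_out;  /* result FIFO                     */
} crypto_regs_t;

#define CRYPTO ((crypto_regs_t *)CRYPTO_BASE)

/* Weak design: any software that can reach CRYPTO_BASE can read the key back. */
uint32_t leak_key_word(int i)
{
    return CRYPTO->key[i];   /* this read-back path should not exist */
}

/* Safer pattern: the key registers are write-only (reads return zero,
 * enforced in hardware) and the fabric restricts the whole block to
 * trusted agents, so software only ever writes the key. */
void load_key(const uint32_t k[8])
{
    for (int i = 0; i < 8; i++)
        CRYPTO->key[i] = k[i];
}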
CWE-1274: Improper Access Control for Volatile Memory Containing Boot Code
View customized information:
For users who are interested in more notional aspects of a weakness. Example: educators, technical writers, and project/program managers.
For users who are concerned with the practical application and details about the nature of a weakness and how to prevent it from happening. Example: tool developers, security researchers, pen-testers, incident response analysts.
For users who are mapping an issue to CWE/CAPEC IDs, i.e., finding the most appropriate CWE for a specific issue (e.g., a CVE record). Example: tool developers, security researchers.
For users who wish to see all available information for the CWE/CAPEC entry.
For users who want to customize what details are displayed.
×
The product conducts a secure-boot process that transfers bootloader code from Non-Volatile Memory (NVM) into Volatile Memory (VM), but it does not have sufficient access control or other protections for the Volatile Memory. Adversaries could bypass the secure-boot process and execute their own untrusted, malicious boot code.
As part of a secure-boot process, the read-only memory (ROM) code for a System-on-Chip (SoC) or other system fetches bootloader code from Non-Volatile Memory (NVM) and stores the code in Volatile Memory (VM), such as dynamic random-access memory (DRAM) or static random-access memory (SRAM). The NVM is usually external to the SoC, while the VM is internal to the SoC. As the code is transferred from NVM to VM, it is authenticated by the SoC's ROM code. If the volatile-memory-region protections or access controls are insufficient to prevent modifications from an adversary or untrusted agent, the secure boot may be bypassed or replaced with the execution of an adversary's code.
Languages: Class: Not Language-Specific (Undetermined Prevalence)
Operating Systems: Class: Not OS-Specific (Undetermined Prevalence)
Architectures: Class: Not Architecture-Specific (Undetermined Prevalence)
Technologies: Class: Not Technology-Specific (Undetermined Prevalence)
Example 1
A typical SoC secure-boot flow includes fetching the next piece of code (i.e., the bootloader) from NVM (e.g., Serial Peripheral Interface (SPI) flash) and transferring it to volatile internal memory (DRAM/SRAM), from which it executes more efficiently.
(bad code)
The volatile-memory protections or access controls are insufficient. The memory from which the bootloader executes can be modified by an adversary.
(good code)
A good architecture should define appropriate protections or access controls to prevent modification by an adversary or untrusted agent once the bootloader is authenticated.
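A minimal C sketch of the intended ordering is shown below; spi_read(), verify_signature(), and mpu_lock_region() are hypothetical placeholders for whatever services the SoC's boot ROM actually provides. The point is that the volatile copy is locked against other agents after authentication and before execution.
(informative) Example Language: C
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define BOOTLOADER_LOAD_ADDR ((void *)0x20000000u)  /* internal SRAM (assumed) */
#define BOOTLOADER_MAX_SIZE  (256u * 1024u)

/* Hypothetical ROM services; names and signatures are placeholders. */
extern size_t spi_read(uint32_t flash_off, void *dst, size_t len);
extern bool   verify_signature(const void *img, size_t len);
extern void   mpu_lock_region(void *addr, size_t len);  /* read/execute only */

void rom_boot_stage(void)
{
    size_t len = spi_read(0x0, BOOTLOADER_LOAD_ADDR, BOOTLOADER_MAX_SIZE);

    if (!verify_signature(BOOTLOADER_LOAD_ADDR, len))
        for (;;) ;                 /* refuse to boot */

    /* Critical ordering: once authenticated, the SRAM copy is locked so no
     * other agent (DMA engine, co-processor, later software) can modify the
     * code between verification and execution. */
    mpu_lock_region(BOOTLOADER_LOAD_ADDR, len);

    ((void (*)(void))BOOTLOADER_LOAD_ADDR)();  /* jump to the bootloader */
}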
CWE-1317: Improper Access Control in Fabric Bridge
The product uses a fabric bridge for transactions between two Intellectual Property (IP) blocks, but the bridge does not properly perform the expected privilege, identity, or other access control checks between those IP blocks.
In hardware designs, different IP blocks are connected through interconnect-bus fabrics (e.g., AHB and OCP). Within a System on Chip (SoC), the IP block subsystems could be using different bus protocols; in such a case, the IP blocks are linked to the central bus (and to other IP blocks) through a fabric bridge. Bridges are used as bus-interconnect routing modules that link different protocols or separate different segments of the overall SoC interconnect. For overall system security, it is important that the access-control privileges associated with any fabric transaction are consistently maintained and applied, even when they are routed or translated by a fabric bridge. A bridge that is connected to a fabric without security features forwards transactions to the slave without checking the privilege level of the master, resulting in a weakness in SoC access-control security. The same weakness occurs if a bridge does not check the hardware identity of the transaction received on its slave interface.
Languages: Class: Not Language-Specific (Undetermined Prevalence)
Operating Systems: Class: Not OS-Specific (Undetermined Prevalence)
Architectures: Class: Not Architecture-Specific (Undetermined Prevalence)
Technologies: Processor Hardware (Undetermined Prevalence); Class: Not Technology-Specific (Undetermined Prevalence)
Example 1
This example is from CVE-2019-6260 [REF-1138]. The iLPC2AHB bridge connects a CPU (with multiple privilege levels, such as user, super user, and debug) over an AHB interface to an LPC bus. Several peripherals are connected to the LPC bus. The bridge is expected to check the privilege level of the transactions initiated in the core before forwarding them to the peripherals on the LPC bus. The bridge does not implement these checks and allows reads and writes from all privilege levels. To address this, designers should either implement hardware-based checks that are hardcoded to block untrusted agents from accessing secure peripherals, or implement firmware flows that configure the bridge to block untrusted agents from making arbitrary reads or writes.
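For illustration only, the bridge's forwarding decision can be modeled as a small C function; the transaction fields, privilege encoding, and helper functions below are assumptions, and in a real design the check would be enforced in the bridge's RTL rather than in software.
(informative) Example Language: C
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t addr;
    uint32_t data;
    uint8_t  master_id;   /* hardware identity of the initiator */
    uint8_t  priv_level;  /* assumed encoding: 0 = user, 1 = supervisor, 2 = debug */
    bool     is_write;
} fabric_txn_t;

/* Hypothetical helpers standing in for the bridge's address map and LPC side. */
extern bool is_secure_peripheral(uint32_t addr);
extern void lpc_forward(const fabric_txn_t *txn);

/* Flawed bridge: forwards every transaction (the CVE-2019-6260 pattern). */
void bridge_forward_unchecked(const fabric_txn_t *txn)
{
    lpc_forward(txn);
}

/* Bridge that carries the access-control policy across the protocol boundary. */
void bridge_forward_checked(const fabric_txn_t *txn)
{
    if (is_secure_peripheral(txn->addr) && txn->priv_level < 1)
        return;            /* drop (or error) the unprivileged transaction */
    lpc_forward(txn);
}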
CWE-1245: Improper Finite State Machines (FSMs) in Hardware Logic
Faulty finite state machines (FSMs) in the hardware logic allow an attacker to put the system into an undefined state, causing a denial of service (DoS) or gaining privileges on the victim's system.
The functionality and security of the system heavily depend on the implementation of FSMs. FSMs can be used to indicate the current security state of the system, and many secure data operations and data transfers rely on the state reported by the FSM. Faulty FSM designs that do not account for all states, either through undefined states (left as don't-cares) or through incorrect implementation, might allow an attacker to drive the system into an unstable state from which it cannot recover without a reset, thus causing a DoS. Depending on what the FSM is used for, an attacker might also gain additional privileges to launch further attacks and compromise the security guarantees.
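As a rough illustration, the difference between an incomplete and a complete state-transition function can be modeled in C; the states and inputs are hypothetical, and in hardware the same idea corresponds to giving the RTL case statement a default arm that returns the machine to a safe, least-privilege state.
(informative) Example Language: C
typedef enum { ST_IDLE, ST_AUTH, ST_UNLOCKED, ST_ERROR } state_t;

/* Incomplete version: an unexpected or glitched state value is simply kept,
 * leaving the machine stuck (DoS) or, worse, interpreted as unlocked. */
state_t next_state_incomplete(state_t s, int auth_ok)
{
    switch (s) {
    case ST_IDLE:     return ST_AUTH;
    case ST_AUTH:     return auth_ok ? ST_UNLOCKED : ST_IDLE;
    case ST_UNLOCKED: return ST_UNLOCKED;
    /* no arm for ST_ERROR or for out-of-range encodings */
    }
    return s;
}

/* Complete version: every unspecified state collapses to a recoverable,
 * least-privilege state instead of being left as a don't-care. */
state_t next_state_complete(state_t s, int auth_ok)
{
    switch (s) {
    case ST_IDLE:     return ST_AUTH;
    case ST_AUTH:     return auth_ok ? ST_UNLOCKED : ST_IDLE;
    case ST_UNLOCKED: return ST_UNLOCKED;
    default:          return ST_IDLE;
    }
}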