# “My GPU is throttling” is not a fault report

A monitoring dashboard shows that an accelerator’s clock frequency has dropped under sustained load. The temperature is high. The line in the runbook reads “thermal throttling.” The instinct is to escalate it as a hardware problem. Usually it isn’t one: it is the silicon working exactly as designed, applying a protection mechanism the vendor built into the firmware to keep the device inside its physical operating envelope.

Whether thermal throttling is a problem depends entirely on the workload’s expectations. As an exception state, it is rare. As a steady-state governor of sustained performance, it is normal, and benchmark numbers measured before throttling engages and after thermal equilibrium is reached can differ substantially. The methodological consequence: a benchmark report that does not declare which thermal regime it measured in cannot inform a deployment decision about a thermally constrained system.

## Why thermal throttling matters for any sustained AI workload

Thermal throttling is the designed reduction of clock frequency or operating voltage that an accelerator’s firmware applies once on-die temperature crosses a vendor-defined threshold. It is the mechanism by which the silicon protects itself from operating outside its physical envelope. Several aspects of that definition matter operationally.

**Firmware-implemented, not OS-implemented.** Throttling decisions are made by the device’s firmware in response to on-die temperature sensors, not by the operating system or the framework. The OS sees the consequences (lower reported clocks, lower throughput) but does not control the threshold or the response curve.

**Threshold-driven, not gradual.**
Most modern accelerators implement multiple throttle thresholds — a soft one that begins gradual frequency reduction, a harder one that triggers more aggressive frequency cuts, and an emergency one that powers down the device. Throttle behavior is a step function, not a smooth roll-off, even if the visible effect under varying load looks gradual.

**Designed to be reached.** The thresholds are calibrated to the silicon’s safe operating region, which means the device is engineered to throttle rather than damage itself. Reaching the throttle threshold is a designed outcome, not an exceptional event.

**Distinct from power throttling.** Power-budget enforcement (PL1/PL2 on Intel, vendor-specific power caps on GPUs) is a related but distinct mechanism: it limits the device based on a power budget rather than temperature. Both can engage simultaneously. Reports that conflate them produce ambiguous diagnostics.

## Why throttling is normal under sustained workloads

The thermal envelope of an accelerator is a function of its power dissipation, its heatsink and airflow, and the ambient conditions of its enclosure. A device under sustained load dissipates roughly its TDP — the thermal design power the cooling system was sized for. If the cooling system is sized to keep the device below the throttle threshold under sustained TDP, the device runs at its full clock indefinitely. If the cooling system is undersized for sustained TDP — common in dense data-center configurations, common in edge deployments, common in laptops — the device will reach the throttle threshold and stabilize at a lower clock that the cooling system can sustain.

The latter case is not a failure mode. The device is working as designed: it is using its throttle mechanism to stay inside the operating envelope the cooling solution can support.
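The threshold-driven, step-function behavior described under “Threshold-driven, not gradual” can be sketched in a few lines. This is a minimal illustration only: the threshold temperatures, clock values, and scaling factors below are invented, and real values are vendor- and SKU-specific and live in device firmware, not in host code.

```python
# Illustrative sketch of multi-threshold throttling as a step function.
# All thresholds and clock levels are invented for illustration; real
# values are vendor-specific and enforced by device firmware.

SOFT_THROTTLE_C = 84      # begin gradual frequency reduction
HARD_THROTTLE_C = 90      # aggressive frequency cuts
SHUTDOWN_C = 95           # emergency power-down

BASE_CLOCK_MHZ = 1980     # invented nominal clock

def governed_clock_mhz(die_temp_c: float) -> float:
    """Map on-die temperature to an allowed clock -- a step function,
    not a smooth roll-off."""
    if die_temp_c >= SHUTDOWN_C:
        return 0.0                       # emergency: device powers down
    if die_temp_c >= HARD_THROTTLE_C:
        return BASE_CLOCK_MHZ * 0.6      # hard throttle: deep clock cut
    if die_temp_c >= SOFT_THROTTLE_C:
        return BASE_CLOCK_MHZ * 0.85     # soft throttle: modest reduction
    return BASE_CLOCK_MHZ                # below all thresholds: full clock

for t in (70, 85, 91, 96):
    print(f"{t} C -> {governed_clock_mhz(t):.0f} MHz")
```

Under varying load the die temperature wanders across these thresholds, so the time-averaged clock can look like a smooth reduction even though each individual decision is a discrete step.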
The throughput at the throttled clock is the throughput that hardware-and-cooling combination can actually sustain, and any benchmark number measured before the device reached this equilibrium overstates what the deployment will see.

The pattern that matters for benchmark interpretation: transient peak performance is a property of the silicon at cold start; sustained practical performance is a property of the silicon-plus-cooling system at thermal equilibrium, and the two can differ substantially. The gap between them is exactly the gap between “what the spec sheet says” and “what production will actually run at.”

## When throttling is a problem

Throttling becomes a problem — actionable as a system fault — when:

- The throttle engages at temperatures below the vendor threshold, indicating a sensor calibration issue or a damaged thermal interface.
- The throttle engages at unexpectedly low ambient temperatures, indicating undersized or failing cooling infrastructure.
- The sustained throttled clock is lower than the deployment’s capacity plan accounted for, indicating either that the cooling spec was wrong or that the workload’s effective TDP was higher than estimated.
- The throttle behavior is intermittent in a way that produces bursty latency tails, indicating cooling-system instability rather than steady-state operation.

In all of these cases, the throttle is the symptom, not the underlying issue. Investigating the cooling, the thermal interface, the airflow, or the workload’s effective power profile is what addresses the problem. Disabling the throttle (where vendors permit it) does not address the problem; it removes the protection mechanism while leaving the cause in place.

## What throttling implies for benchmark interpretation

The methodological implication is concrete: a benchmark on a thermally constrained system has to declare its thermal regime, because the regime determines what the result represents.
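The gap between transient peak and thermal-equilibrium throughput can be made concrete with a toy first-order thermal model. Every constant here (thermal resistance, power draw, threshold, relaxation rate) is invented; the point is only the shape of the behavior: full clock while the die heats up, then a lower sustained average once the throttle threshold governs.

```python
# Toy thermal model (all constants invented) showing why cold-start
# throughput exceeds sustained throughput at thermal equilibrium.

AMBIENT_C = 30.0
THERM_RES = 0.12        # deg C per watt -- an undersized cooler
THROTTLE_C = 90.0       # throttle threshold
FULL_CLOCK, THROTTLED_CLOCK = 1.0, 0.7   # normalized clocks
POWER_AT_FULL = 600.0   # watts at full clock
ALPHA = 0.05            # fraction of the gap to steady state closed per step

def simulate(steps: int):
    temp, samples = AMBIENT_C, []
    for _ in range(steps):
        clock = THROTTLED_CLOCK if temp >= THROTTLE_C else FULL_CLOCK
        power = POWER_AT_FULL * clock
        target = AMBIENT_C + power * THERM_RES   # steady-state temp at this power
        temp += (target - temp) * ALPHA          # relax toward steady state
        samples.append(clock)                    # throughput tracks clock here
    return samples

samples = simulate(600)
cold_start = sum(samples[:20]) / 20       # the "first seconds" number
sustained = sum(samples[-100:]) / 100     # the equilibrium number
print(f"cold-start mean clock: {cold_start:.2f}  sustained mean clock: {sustained:.2f}")
```

With these invented constants the first samples run at the full clock while the sustained average settles below it: the same device, two different regimes, two different benchmark numbers.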
**Pre-throttle / cold-start measurements** describe the silicon’s transient peak. These numbers are the upper bound on what the device can do in the first seconds of a workload. They are useful for characterizing burst capacity and as the silicon’s nominal capability. They are not useful for predicting sustained throughput.

**Post-throttle / thermal-equilibrium measurements** describe the (silicon + cooling) system’s sustained capability under continuous load. These are the numbers that predict deployment behavior under continuous workloads: training runs, sustained inference traffic, long-running batch jobs.

A benchmark report that does not state which regime it measured in implicitly invites the reader to assume one — usually the more favorable one — and produces a number whose generalization to the reader’s deployment depends on a regime match the report did not establish.

| Measurement regime | What it characterizes | Useful for |
| --- | --- | --- |
| Pre-throttle / first seconds | Silicon’s transient peak | Burst capacity, nominal capability |
| During throttle transition | Mixed; difficult to interpret | Diagnostic only |
| Post-throttle / thermal equilibrium | (Silicon + cooling) sustained capability | Capacity planning, sustained workload prediction |

A benchmark protocol that produces interpretable results explicitly defines the warm-up window (long enough for the device-and-cooling configuration to reach thermal equilibrium), discards measurements taken during it, reports the post-equilibrium numbers, and discloses the cooling configuration and ambient conditions that determined where equilibrium was reached.

## The framing that helps

Thermal throttling is a designed protection mechanism, not a hardware fault. It is the silicon’s normal response to operating against the boundary of its thermal envelope, and it is what determines the gap between transient peak performance and sustained practical performance.
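One way to operationalize the protocol described earlier (a defined warm-up window, discarded warm-up samples, post-equilibrium reporting) is to detect equilibrium from the temperature trace itself. This is a heuristic sketch: the flatness criterion, window size, and function name are mine, not a standard API.

```python
# Heuristic sketch: declare thermal equilibrium when a rolling window of
# temperature samples is flat, discard everything before it, and report
# only post-equilibrium throughput. Criterion and names are illustrative.
from statistics import mean

def post_equilibrium_stats(temps, throughputs, window=30, max_drift_c=0.5):
    """Return post-equilibrium summary stats, or None if the trace
    never stabilizes (in which case the run should not be reported)."""
    assert len(temps) == len(throughputs)
    for i in range(len(temps) - window + 1):
        w = temps[i:i + window]
        if max(w) - min(w) <= max_drift_c:      # flat window: equilibrium
            eq = i + window - 1                 # first sample we trust
            sustained = throughputs[eq:]
            return {"equilibrium_index": eq,
                    "discarded_samples": eq,
                    "sustained_mean": mean(sustained)}
    return None

# Synthetic trace: temperature ramps 1 C/sample, then settles; throughput
# drops once the (invented) 90 C threshold governs.
temps = [40 + min(t, 50) for t in range(120)]
tput = [100 if t < 90 else 80 for t in temps]
print(post_equilibrium_stats(temps, tput))
```

The disclosure half of the protocol (cooling configuration, ambient conditions) cannot be automated away; it has to travel with the reported numbers.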
Benchmark reports must declare which thermal regime they measured in for the result to be applicable to a deployment scenario; numbers measured before equilibrium overstate sustained performance under continuous workloads.

Building on *Power, thermals, and the hidden governors of performance*, the operational expression is that thermals are not a footnote to a benchmark: they are part of what the benchmark is measuring, and a methodology that does not account for them reports a regime-dependent number as if it were regime-independent.

LynxBench AI requires performance to be characterized after thermal equilibrium has been reached for the device-and-cooling configuration under test, and treats the warm-up window and ambient conditions as required disclosure, because the sustained practical performance these conditions produce is what predicts deployment behavior.

The question to put to any thermal performance claim: was the number recorded post-equilibrium under a declared cooling envelope, or pre-throttle in a configuration the deployment will not reproduce?