# A capacity plan built on TDP is a fiction

The simplest way to plan power for an AI data center is to multiply the nameplate TDP of each accelerator by the count and add overhead. The number that comes out is wrong in both directions, often by a large margin. Sometimes the deployment draws substantially less than the spec sheet implies, because memory-bound inference workloads do not push the silicon to its compute envelope. Sometimes it draws right up to the envelope, because compute-bound training workloads do. The same accelerator inventory can have a very different power footprint depending on which workload it actually runs. Power planning that treats TDP as a constant produces capacity numbers that don’t survive contact with production. The framing that does survive treats power draw as workload-conditional and plans around the workload mix and saturation profile, not the nameplate.

## Why TDP is not deployment power

Thermal Design Power is, in vendor specifications, the power level the cooling system must be able to dissipate to keep the device inside its thermal envelope under sustained load. It is a cooling-design parameter, not a guaranteed power consumption. Several aspects of how TDP is reported make it a poor capacity-planning input on its own:

- **TDP is sustained, not peak.** Peak instantaneous power can exceed TDP for short periods (boost behavior, transient spikes during workload phase changes). Power-supply sizing has to account for the peaks; TDP alone does not.
- **TDP does not vary by workload.** A single TDP number is published per device, but the same device draws less power under a memory-bound workload than under a compute-bound one. The nameplate captures the upper sustained envelope, not the operating reality.
- **TDP excludes auxiliary subsystems.** Memory power, cooling fan power, and local interconnect power are typically not in the device TDP. Whole-server power is meaningfully higher than the sum of accelerator TDPs.
- **TDP is configurable on many platforms.** Vendor power-cap mechanisms allow administrators to lower the effective TDP for thermal or power-budget reasons, which changes both the power footprint and the throughput.

Using TDP as a capacity-plan input therefore over-estimates power for some workloads and under-estimates whole-system draw for others. Both errors are large enough to invalidate procurement decisions made from them.
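The gap between the configured cap and the operating draw is directly observable. A minimal measurement sketch, assuming an NVIDIA device and the `pynvml` bindings (the `nvidia-ml-py` package); the sample count and interval are illustration values, not a methodology recommendation:

```python
import time

import pynvml

# Assumes nvidia-ml-py is installed and an NVIDIA driver is present; other
# vendors expose similar counters through their own management libraries.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# The enforced limit is the *configured* cap, which may already sit below
# the factory nameplate if an administrator has lowered it.
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0  # mW -> W

# Sample board draw once per second while the workload of interest runs.
# Sustained draw is what matters, so sample past the thermal transient.
samples_w = []
for _ in range(300):
    samples_w.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)  # mW -> W
    time.sleep(1.0)

print(f"configured limit: {limit_w:.0f} W")
print(f"sustained mean:   {sum(samples_w) / len(samples_w):.0f} W")
print(f"observed peak:    {max(samples_w):.0f} W")

pynvml.nvmlShutdown()
```

Run once under a memory-bound inference workload and once under a compute-bound training workload, the sustained mean will typically land in two different places under the same configured limit, which is exactly the workload-conditionality the next section describes.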
## How power draw varies with workload

The dominant pattern in observed AI accelerator power draw is the gap between training and inference workloads on the same hardware. Training workloads, which are typically compute-bound and run sustained large-batch matrix operations, push the device near or at its TDP envelope and sustain that draw for the duration of the run. Inference workloads, which are typically memory-bound for autoregressive models or compute-bound at small scales for vision models, sit at variable points below the envelope depending on the model architecture, batch size, and request profile. A few patterns that recur:

- **Compute-bound training:** sustained draw at or near nameplate TDP. Capacity plans that size to TDP are approximately right (modulo auxiliary-subsystem overhead).
- **Memory-bound inference (autoregressive token generation):** sustained draw substantially below TDP. The accelerator’s compute units idle waiting for memory traffic, and that idle time corresponds to lower power draw. Capacity plans that size to TDP overshoot.
- **Compute-bound inference (small vision models, large batches):** draw near nameplate TDP. Similar profile to training in power terms.
- **Mixed workload deployments:** time-varying draw following the workload mix. Average draw lies between the bounded extremes; peak draw tracks the most compute-intensive workload running concurrently.

The workload-conditionality is not a small effect. The gap between memory-bound inference draw and compute-bound training draw on the same hardware can be large enough that a deployment sized on the wrong assumption is either substantially over-provisioned (wasted power capacity, wasted cooling capacity, wasted capital) or substantially under-provisioned (cannot run its compute-bound peak workload).

## What a workload-conditional capacity plan looks like

A power capacity plan that survives production replaces “TDP × N” with a workload-conditioned model. The components, combined in the sketch after this list:

- **Workload-mix declaration.** What fraction of the accelerator inventory runs training vs inference, and what fraction of inference is compute-bound vs memory-bound. This determines the sustained-draw distribution across the fleet.
- **Per-workload measured draw.** Not vendor specification, but actual observed power draw on the (accelerator + workload + executor stack) combinations the deployment will run. This is a measurement input, not a calculation input.
- **Peak-vs-sustained separation.** Power-supply sizing accounts for the peak; cooling sizing accounts for the sustained envelope; capacity planning for capital and operating cost accounts for the average draw weighted by utilization.
- **Auxiliary-subsystem overhead.** Memory subsystem, fans, host CPU, networking, and PSU efficiency losses, which collectively can add 30-50% to accelerator-only power for a complete server.
- **Headroom for workload growth.** Because power infrastructure (PDUs, transformers, cooling) is built ahead of utilization, the plan needs to account for where the workload mix is projected to shift, not where it sits today.

The output of such a plan is not a single power number; it is a sustained envelope (for cooling and continuous-draw planning), a peak envelope (for power-supply and circuit sizing), and an expected average (for cost forecasting). Each is a different number, and each is informative for a different procurement decision.
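A minimal sketch of the resulting arithmetic, pulling the components above together. The class name, the overhead multiplier, the headroom factor, and the fleet numbers are all illustrative assumptions; the per-device watt figures stand in for measured values, not spec-sheet ones:

```python
from dataclasses import dataclass


@dataclass
class WorkloadClass:
    """One slice of the fleet, with measured per-device draw in watts."""
    name: str
    devices: int          # accelerators assigned to this workload class
    sustained_w: float    # measured sustained draw per device
    peak_w: float         # measured transient peak per device
    utilization: float    # fraction of time the class is actually running


# Auxiliary-subsystem overhead (host CPU, memory, fans, NICs, PSU losses) as a
# multiplier on accelerator draw. 1.4 is an assumed mid-point of the 30-50%
# range above; measure it for the actual server configuration.
AUX_OVERHEAD = 1.4


def envelopes(fleet, growth_headroom=1.15):
    """Return (sustained, peak, average) whole-fleet watts.

    Summing per-device peaks is the conservative bound: it assumes transients
    across the fleet can align. Headroom applies to the infrastructure built
    (sustained and peak envelopes), not to the expected average used for cost.
    """
    sustained = sum(w.devices * w.sustained_w for w in fleet) * AUX_OVERHEAD
    peak = sum(w.devices * w.peak_w for w in fleet) * AUX_OVERHEAD
    average = sum(w.devices * w.sustained_w * w.utilization for w in fleet) * AUX_OVERHEAD
    return sustained * growth_headroom, peak * growth_headroom, average


# Illustrative fleet; every number is a placeholder, not a measurement.
fleet = [
    WorkloadClass("training (compute-bound)", 512, 690.0, 750.0, 0.95),
    WorkloadClass("inference (memory-bound)", 1024, 380.0, 550.0, 0.60),
]
sustained_w, peak_w, average_w = envelopes(fleet)
print(f"cooling / sustained envelope: {sustained_w / 1e6:.2f} MW")
print(f"supply / peak envelope:       {peak_w / 1e6:.2f} MW")
print(f"cost / expected average:      {average_w / 1e6:.2f} MW")
```

The three printed numbers map one-to-one onto the three procurement decisions above; collapsing them into a single figure is where “TDP × N” plans go wrong.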
## Why this matters for benchmark interpretation

A benchmark that reports performance per watt is reporting a ratio whose denominator is workload-conditional in the same way as the throughput in the numerator. The “watts” in performance-per-watt are the watts the device drew under the benchmark’s workload, not its nameplate TDP. Two benchmarks of the same accelerator on different workloads can produce different performance-per-watt figures because the watts denominator shifted, not just the throughput numerator.

Building on power, thermals, and the hidden governors of performance, the operational point is that power is not a constant property of the device: it is a property of the (device + workload + saturation point) system, and any planning or benchmarking that treats it as constant is conflating regimes that behave differently.

## Power-envelope checklist

Use this checklist to decide whether a candidate AI hardware power figure is usable as a capacity-plan input:

- **Workload disclosed.** The figure is paired with the workload that produced it (model, batch, concurrency, precision), not reported as a device property.
- **Sustained, not transient.** The measurement was taken after the device reached thermal equilibrium, not during a short burst.
- **Saturation regime named.** Whether the figure represents idle, partial-load, or saturated draw is stated explicitly.
- **Auxiliary subsystems counted.** Host CPU, NIC, memory, and cooling overhead are included in the envelope or quantified separately.
- **Three numbers, not one.** Sustained envelope, peak envelope, and expected average are reported separately for the three different procurement decisions they support.

A figure that fails any item is a nameplate or a snapshot, not a capacity-plan input.

The framing that helps AI data center power planning treats power draw as workload-conditional, separates peak from sustained from average, accounts for auxiliary-subsystem overhead, and is built on measured per-workload draw rather than nameplate TDP. The capacity plan is not a single number; it is three envelopes used for three different procurement decisions.

LynxBench AI treats sustained performance and sustained power draw as paired measurements on the same AI Executor under the same workload: performance-per-watt and capacity-plan inputs are both workload-conditional, and a methodology that holds the workload fixed while measuring both produces inputs that survive into production, while a methodology that assumes nameplate constants does not.
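To make the denominator shift concrete, a minimal closing sketch; every number is a hypothetical placeholder, not a measurement of any real device:

```python
# The same measured throughput divided by two different watt figures yields
# two different efficiency claims. All numbers are hypothetical placeholders.
NAMEPLATE_TDP_W = 700.0        # spec-sheet value
MEASURED_SUSTAINED_W = 390.0   # observed under a memory-bound inference workload
THROUGHPUT_TOK_S = 12_000.0    # tokens/s measured under that same workload

print(f"tokens/s per watt, measured draw: {THROUGHPUT_TOK_S / MEASURED_SUSTAINED_W:.1f}")
print(f"tokens/s per watt, nameplate TDP: {THROUGHPUT_TOK_S / NAMEPLATE_TDP_W:.1f}")
# The first figure is the capacity-plan input; the second divides an operating
# measurement by a cooling-design parameter and describes neither thing.
```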