## “Nameplate throughput × instance count” is a fiction

The simplest way to size an inference fleet is to take the vendor-quoted throughput for the accelerator, multiply by the number of instances, and call the result the fleet’s capacity. The number that comes out has no relationship to what the fleet will actually serve under production conditions. The vendor number is a peak measurement taken at conditions chosen to maximize the metric. The production deployment runs at a different operating point — bounded by the latency SLO, exposed to bursty traffic, sharing memory and IO with the host — and the throughput available at that operating point is substantially lower than the nameplate. A fleet sized on nameplate arithmetic over-counts its capacity, sometimes by a large factor. The number of instances actually required to meet an SLO at production traffic is determined by the AI Executor’s saturation behavior under the production workload, not by the throughput vendors quote at conditions that don’t apply.

## How is AI inference capacity planning different from web-service capacity planning?

Web-service capacity planning has well-understood mechanics. A request consumes a small, bounded slice of CPU and memory; the host can run many concurrent requests; capacity scales near-linearly with instance count until the next-tier resource (database, downstream service, network) saturates. The arithmetic is roughly “request budget per instance × instance count.”

AI inference capacity planning has different mechanics:

- **The unit of work is large and variable.** A single inference can occupy the accelerator for hundreds of milliseconds; a token-generative request occupies it for the duration of generation. Concurrency on a single instance is bounded by batch policy and memory, not by lightweight request multiplexing.
- **Throughput is not linear with batch.** Larger batches raise throughput sublinearly and raise per-request latency. The operating point that maximizes throughput is rarely the operating point that meets the SLO.
- **The bottleneck is the executor, not the host.** Adding hosts behind a load balancer scales capacity; adding requests to a single host beyond its saturation point degrades latency without raising throughput.
- **Saturation behavior has a knee.** Below the knee, latency rises slowly with load; above the knee, it rises sharply. Sizing for “average load” without accounting for the knee produces a fleet that meets the SLO at the average and fails it during normal demand variation.
- **Headroom must absorb bursts and partial failures.** A fleet sized at 100% of measured capacity has no margin for traffic spikes or for one instance going offline.

The arithmetic that fits these mechanics is not “throughput × count.” It is saturation-curve fitting to projected demand under SLO and headroom constraints — a different kind of calculation entirely.
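To make the difference concrete, below is a minimal sketch of that calculation in Python. Every constant in it (the measured curve, the SLO budget, the headroom fraction, the demand forecast) is an illustrative assumption rather than measured data:

```python
# Saturation-curve sizing, sketched: the SLO picks the operating point on a
# measured curve, and headroom/redundancy are applied before the division.
import math

# Assumed measurements on the production AI Executor:
# (batch size, throughput in req/s, p99 latency in ms). Throughput rises
# sublinearly with batch while p99 keeps climbing -- the knee is visible.
measured_curve = [
    (1,   90.0,  45.0),
    (2,  160.0,  60.0),
    (4,  260.0,  95.0),
    (8,  380.0, 180.0),
    (16, 450.0, 390.0),  # past the knee: little throughput gain, p99 blows up
]

SLO_P99_MS = 120.0        # latency budget the fleet must hold
PEAK_DEMAND_RPS = 2400.0  # forecast peak request rate, not the average
HEADROOM = 0.30           # capacity fraction reserved for bursts and scaling lag
REDUNDANCY_K = 2          # n+k instances to absorb partial failures

# 1. The SLO selects the operating point: the highest-throughput point on the
#    measured curve whose p99 stays under the budget.
feasible = [tput for _, tput, p99 in measured_curve if p99 <= SLO_P99_MS]
if not feasible:
    raise ValueError("no operating point on this executor meets the SLO")
effective_tput = max(feasible)

# 2. Headroom discounts per-instance capacity before any division happens.
usable_tput = effective_tput * (1.0 - HEADROOM)

# 3. Size against forecast peak, then add the redundancy margin.
fleet_size = math.ceil(PEAK_DEMAND_RPS / usable_tput) + REDUNDANCY_K

nameplate = max(tput for _, tput, _ in measured_curve)  # the "450 req/s" story
print(f"SLO-bounded per-instance throughput: {effective_tput:.0f} req/s")
print(f"fleet size: {fleet_size} instances; nameplate arithmetic would claim "
      f"{math.ceil(PEAK_DEMAND_RPS / nameplate)} suffice")
```

With these assumed numbers the SLO-bounded operating point delivers 260 req/s per instance rather than the 450 req/s peak, and the fleet comes out at 16 instances where nameplate arithmetic claims 6 would do. The gap is the over-count described above.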
## The inputs an inference capacity plan actually needs

The inputs to a defensible inference capacity plan come from measurement, not from spec sheets:

- **Per-workload throughput-vs-latency curve** on the production AI Executor (accelerator + driver + runtime + framework + inference runtime + precision regime). The curve is traced by sweeping batch size and concurrency (a minimal harness sketch appears further below).
- **The SLO operating point** — typically expressed as a p99 (or p99.9) latency budget. This selects the point on the curve below which the executor must operate, and the throughput at that point is its effective capacity.
- **Per-workload demand forecast** — projected request rate over the planning horizon, including expected diurnal and weekly patterns, expected growth, and expected bursts above the average.
- **Headroom policy** — the fraction of per-instance capacity reserved for traffic spikes, scaling latency, and partial failures (a typical convention is 25–40% headroom; the right number depends on burst behavior and recovery time).
- **Recovery margin** — capacity to absorb the load redistribution when one or more instances go offline (n+1 or n+2 redundancy, depending on the availability target).

The first input is the one most often missing. Vendor numbers are not a substitute. Synthetic benchmarks at vendor-published configurations are not a substitute. The required input is the throughput-vs-latency curve on the production executor running the production workload, because both axes shift when either changes.

## A capacity-sizing checklist

A capacity plan that survives production should satisfy the following:

- **Workload identified** — model, model size, precision regime, expected input shape distribution.
- **AI Executor identified** — accelerator + driver + runtime + framework + inference runtime + precision regime + batch policy.
- **Throughput-vs-latency curve measured** — not extrapolated from a single point; not adopted from vendor literature without re-measurement on the production executor.
- **SLO operating point selected** — p99 or p99.9 budget identified; effective per-instance throughput at that point read off the curve.
- **Demand forecast** — average request rate, peak-to-average ratio, expected growth, expected burst behavior.
- **Headroom policy applied** — fraction of effective capacity reserved as buffer; documented rationale.
- **Redundancy margin applied** — n+k instances above the working set to absorb partial failures.
- **Re-measurement schedule** — when the curve will be re-measured (typically after model version changes, precision changes, framework upgrades, or thermal regime changes).
- **Diurnal/weekly variance handled** — is the fleet sized for peak with auto-scale below, or sized for average with overflow? The choice changes the cost profile.
- **Cost model linked** — accelerator cost, host cost, networking cost, and energy cost projected from the fleet size.

A plan that satisfies this list produces a fleet sizing that survives the conditions production imposes. A plan that elides items produces a sizing that holds only as long as no condition shifts — which is a stronger assumption than AI workloads usually justify.

## Why a single point estimate of throughput is insufficient

Point-estimate throughput cannot ground capacity because the operational risk an inference fleet exists to manage is not in the average; it is in the latency tails and the burst response. A fleet sized at “average demand × headroom factor” using a single throughput number can be:

- Right on average but unable to absorb a 2× burst, because per-instance saturation produces a sharp latency knee the average couldn’t see.
- Right on the median but failing p99 SLOs, because the throughput number was a mean-latency throughput rather than an SLO-bounded throughput.
- Right at the moment of measurement but wrong six months later, because the workload mix shifted and the per-workload throughput on the same hardware shifted with it.

The latency-budget-conditioned throughput distribution is what bounds the operational risk. A capacity plan that uses the distribution as its input — choosing fleet size to keep the SLO-bounded throughput above projected demand at the chosen percentile — is robust to the conditions a point-estimate plan is fragile to.
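The second failure mode on that list is easy to demonstrate with synthetic numbers. The sketch below draws lognormal latency samples whose tail widens as concurrency grows (an assumption standing in for real saturation behavior, not a measurement) and applies the same latency budget to the mean and to the p99:

```python
# Synthetic demonstration: the same sweep yields two different "capacities"
# depending on whether the latency budget is applied to the mean or the p99.
import random
import statistics

random.seed(7)
SLO_MS = 120.0  # assumed latency budget

def percentile(samples, q):
    """Nearest-rank percentile; adequate for a sketch."""
    ordered = sorted(samples)
    return ordered[min(len(ordered) - 1, int(q * len(ordered)))]

# Assumed saturation behavior: as concurrency rises, the latency tail widens
# faster than the body (lognormal sigma grows with load).
sweep = [(2, 3.6, 0.15), (4, 3.8, 0.25), (8, 4.1, 0.40), (16, 4.5, 0.65)]

mean_ok_tput, p99_ok_tput = 0.0, 0.0
print(f"{'conc':>4} {'tput rps':>9} {'mean ms':>8} {'p99 ms':>8}")
for concurrency, mu, sigma in sweep:
    latencies = [random.lognormvariate(mu, sigma) for _ in range(5000)]
    mean_ms = statistics.fmean(latencies)
    p99_ms = percentile(latencies, 0.99)
    # Throughput approximated from Little's law: concurrency / mean service time.
    tput = concurrency / (mean_ms / 1000.0)
    if mean_ms <= SLO_MS:
        mean_ok_tput = max(mean_ok_tput, tput)
    if p99_ms <= SLO_MS:
        p99_ok_tput = max(p99_ok_tput, tput)
    print(f"{concurrency:>4} {tput:>9.0f} {mean_ms:>8.1f} {p99_ms:>8.1f}")

print(f"capacity if the budget is applied to the mean: {mean_ok_tput:.0f} req/s")
print(f"capacity if the budget is applied to the p99:  {p99_ok_tput:.0f} req/s")
```

With these synthetic samples the mean-bounded capacity comes out around 65% higher than the p99-bounded capacity. A fleet sized on the former passes its average-latency dashboard and fails its p99 SLO at the first sustained peak.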
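None of these numbers exists without the curve itself. The inputs list above notes that the curve is traced by sweeping batch size and concurrency; a minimal harness sketch, with a placeholder `infer_batch` standing in for the production executor client (the name and its timing model are assumptions of this sketch, not a real API), looks like this:

```python
# Tracing a throughput-vs-latency curve by sweeping batch size.
import random
import time

def infer_batch(batch_size: int) -> None:
    """Placeholder executor call: simulated sublinear batch scaling plus jitter."""
    base_ms = 40.0 + 18.0 * batch_size ** 0.8  # assumed service-time model
    time.sleep(random.uniform(0.9, 1.4) * base_ms / 1000.0)

def percentile(samples, q):
    ordered = sorted(samples)
    return ordered[min(len(ordered) - 1, int(q * len(ordered)))]

print(f"{'batch':>5} {'tput rps':>9} {'p99 ms':>8}")
for batch_size in (1, 2, 4, 8, 16):
    latencies_ms = []
    started = time.perf_counter()
    for _ in range(30):  # 30 samples makes "p99" nearly the max; real sweeps need far more
        t0 = time.perf_counter()
        infer_batch(batch_size)
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)
    elapsed = time.perf_counter() - started
    completed = 30 * batch_size  # requests finished across the sweep point
    print(f"{batch_size:>5} {completed / elapsed:>9.1f} "
          f"{percentile(latencies_ms, 0.99):>8.1f}")
```

A production harness would also sweep concurrency, run each point long enough to reach thermal steady state, and keep the full latency distribution rather than a 30-sample tail.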
Building on steady-state performance and capacity planning, the operational expression is that capacity is a function of sustained, SLO-bounded throughput on the production executor, not of nameplate throughput on a vendor configuration — and the planning practice has to use the right input for the projection to be useful.

## The framing that helps

Production AI inference capacity planning anchors to saturation-curve measurements on the production AI Executor under the production workload, selects the SLO operating point, applies headroom and redundancy policy explicitly, and re-measures when the executor or workload changes. Nameplate-throughput arithmetic produces fleet sizings that don’t survive contact with production. Latency-budget-conditioned throughput distributions produce fleet sizings that do.

LynxBench AI treats throughput-vs-latency curves on the production AI Executor as the required input for capacity planning — because the SLO-bounded operating point on those curves is what determines real per-instance capacity, and a fleet sized on that input is robust to the conditions a fleet sized on nameplate throughput is exposed to.