# A throughput number without context is not a throughput number

A vendor cites a throughput figure for an AI accelerator: requests per second, tokens per second, images per second. The number is large. It seems to characterize the device. It does not. The same accelerator on the same model produces different throughput numbers under different batch configurations, different precision regimes, and different operating points on the latency curve — and a throughput figure that names none of these is reporting a peak observation, not a comparable measurement.

For AI inference, throughput has a precise definition that ties it inseparably to the batch policy and the latency budget under which it was measured. Reporting throughput without those dimensions is the inference equivalent of reporting peak silicon TFLOPS as application performance: technically a real number, operationally uninformative.

## What is throughput in AI inference, precisely?

Throughput in AI inference is the rate of completed inference requests (or, for token-generative models, generated tokens) per unit wall-clock time, measured under a declared batch size, concurrency level, and latency budget on a fully specified AI Executor. Each component of that definition is non-optional.

- **Rate of completed work, not initiated work.** Throughput counts what came out the other side. Started-but-not-completed requests do not contribute. This matters under heavy load: a server that accepts more requests than it can complete can show a high accept-side “requests per second” count that never materializes in the response stream.
- **Per unit wall-clock time.** The denominator is real time, not CPU time or GPU-active time. A measurement that excludes queue time or framework overhead is reporting kernel throughput, not inference throughput.
- **Under a declared batch size.** The same model on the same accelerator can produce dramatically different throughput at batch=1 vs batch=8 vs batch=64. The throughput number is a function of the batch policy, not a property of the device alone.
- **At a declared latency budget.** Throughput can almost always be raised by increasing batch size — at the cost of per-request latency. A throughput number untethered from a latency budget can be optimized arbitrarily by accepting arbitrarily bad latency, which is why the latency budget is part of the throughput report, not a separate concern.
- **On a fully specified AI Executor.** Accelerator hardware, driver, runtime, framework, inference runtime, and precision regime all affect the throughput a measurement will produce.

## Why batch size is inseparable from throughput

Modern AI accelerators are designed to process work in batches because per-batch overhead amortizes across the batch’s items. Larger batches generally raise throughput (more items per unit kernel time) and raise per-request latency (each item waits for the batch to form and complete). The relationship is not linear in either direction, and it has a saturation point: beyond some batch size, throughput stops growing because some other resource — memory capacity, the scheduler, kernel occupancy — becomes the bottleneck.

The practical consequence is that “throughput at batch X” and “throughput at batch Y” are different numbers describing different operating points of the same system. Comparing the throughput of one accelerator at its optimal batch to the throughput of another accelerator at a different batch is not a hardware comparison; it is a comparison between two operating points of two different systems. A sweep across batch sizes makes this concrete (see the sketch below).
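As a minimal sketch of that sweep, the snippet below measures one operating point per batch size: completed items per unit wall-clock time, plus the p99 latency observed by individual requests. The `run_batch` stub and its timing constants are stand-ins for a real batched call into an AI Executor, not any particular runtime's API; only the measurement arithmetic is the point.

```python
"""Sketch: tracing the throughput-vs-latency envelope by sweeping batch size."""
import random
import statistics
import time


def run_batch(batch_size: int) -> None:
    """Placeholder for one synchronous batched inference call.

    Simulates a fixed per-batch overhead plus per-item compute, so larger
    batches amortize overhead (higher throughput) but take longer end to
    end (higher per-request latency). A real harness would submit the batch
    to the accelerator and block until every request in it completes.
    """
    overhead_s = 0.008                            # per-batch cost: launch, scheduling
    per_item_s = 0.002 * random.uniform(0.9, 1.1)  # per-item compute, with jitter
    time.sleep(overhead_s + per_item_s * batch_size)


def measure_operating_point(batch_size: int, num_batches: int = 30) -> dict:
    """Measure one operating point: completed items per wall-clock second and p99 latency."""
    latencies = []
    completed = 0
    t0 = time.perf_counter()
    for _ in range(num_batches):
        start = time.perf_counter()
        run_batch(batch_size)
        batch_latency = time.perf_counter() - start
        # Every request in the batch waits for the whole batch to finish,
        # so each one observes (at least) the full batch latency.
        latencies.extend([batch_latency] * batch_size)
        completed += batch_size
    wall_clock_s = time.perf_counter() - t0  # real time, including harness overhead
    return {
        "batch_size": batch_size,
        "throughput_rps": completed / wall_clock_s,
        "p99_latency_ms": 1000 * statistics.quantiles(latencies, n=100)[98],
    }


if __name__ == "__main__":
    for batch in (1, 8, 64):
        point = measure_operating_point(batch)
        print(f"batch={point['batch_size']:>3}  "
              f"{point['throughput_rps']:7.1f} req/s  "
              f"p99={point['p99_latency_ms']:6.1f} ms")
```

Run as written, the simulated executor shows throughput rising with batch size while p99 latency rises alongside it, which is the trade-off the next section traces as a curve.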
A throughput number that doesn’t name its batch is comparable to nothing. A throughput number that names its batch is comparable only to throughput numbers measured at the same batch on the same workload — and even then, only if the latency at that batch is acceptable for the deployment scenario.

## Throughput and latency are coupled, not independent

The throughput-vs-latency curve is the structure that links the two metrics. For a given AI Executor and a given workload, every batch/concurrency configuration produces both a throughput and a latency distribution. The curve traced by sweeping configurations is the system’s operating envelope. Three points on that curve illustrate why throughput in isolation is uninformative:

| Operating point | Throughput | p99 latency | Useful for |
| --- | --- | --- | --- |
| Single request, no batching | Low | Low | Latency-critical small workloads |
| Optimal batch for throughput | High | High (often unacceptably so) | Offline / batch-mode workloads |
| Highest batch under latency SLO | Moderate-to-high | Bounded by SLO | Online inference services |

The vendor-quoted “peak throughput” is typically the middle row. The number a deployment team needs is the third row. They are different points on the same curve, and they describe different operational realities.

The framing that follows from this is to report throughput at a declared latency budget — for example, “X requests per second at p99 ≤ 100 ms” — rather than throughput in isolation. This bounds the trade-off explicitly and produces a number a deployment team can apply.

## Throughput vs bandwidth: a related distinction

A common adjacent confusion is between throughput and bandwidth. Bandwidth measures the rate at which data can be moved through a channel (memory bus, network link, storage interface) and is typically reported in bytes per second. Throughput in inference measures the rate of completed work and is reported in requests, tokens, or items per second.

Bandwidth is an upper bound on the work throughput a memory-bound workload can sustain — but it is not the same number. A workload bottlenecked by memory bandwidth will exhibit throughput proportional to the bandwidth available to its access pattern; a workload bottlenecked by compute will exhibit throughput unrelated to nominal bandwidth. Reporting bandwidth and calling it throughput conflates the upper-bound resource with the work-rate measurement, and a benchmark report should keep them lexically and methodologically separate.

## What disclosure makes a throughput number useful

A throughput number for AI inference becomes interpretable when the report names:

- The model and its size.
- The precision regime of the inference.
- The AI Executor (accelerator + driver + runtime + framework + inference runtime versions).
- The batch policy (static N / dynamic with timeout / continuous batching) and the batch size at which the throughput was measured.
- The concurrency level under which the measurement was sustained.
- The latency budget under which the throughput was achieved (e.g. p99 ≤ X ms).
- The duration of the measurement window (long enough for thermal equilibrium, with warm-up excluded).

A throughput report that satisfies this list characterizes an operating point on the system’s throughput-vs-latency curve. A throughput report that names a single number is a peak observation under unspecified conditions, and inferring anything about deployment from it is left as the reader’s problem.
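As a sketch of what “X requests per second at p99 ≤ 100 ms” looks like as a report, the snippet below picks, from a set of measured operating points, the highest-throughput one whose p99 fits a declared budget, and attaches the disclosure fields listed above. The field names, the example model and executor strings, and the sweep numbers are all illustrative assumptions, not a LynxBench AI schema.

```python
"""Sketch: reporting throughput at a declared latency budget, with disclosure."""
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass(frozen=True)
class OperatingPoint:
    """One measured point on the throughput-vs-latency curve."""
    batch_size: int
    concurrency: int
    throughput_rps: float
    p99_latency_ms: float


@dataclass(frozen=True)
class ThroughputReport:
    """Disclosure that scopes a throughput number to its operating point."""
    model: str                 # model and size
    precision: str             # precision regime of the inference
    executor: str              # accelerator + driver + runtime + framework versions
    batch_policy: str          # static N / dynamic with timeout / continuous batching
    measurement_window_s: int  # steady-state window, warm-up excluded
    p99_budget_ms: float       # the latency budget the number was measured under
    point: OperatingPoint      # the operating point that was selected


def select_under_budget(points: Sequence[OperatingPoint],
                        p99_budget_ms: float) -> Optional[OperatingPoint]:
    """Highest-throughput operating point whose p99 meets the budget, if any."""
    feasible = [p for p in points if p.p99_latency_ms <= p99_budget_ms]
    return max(feasible, key=lambda p: p.throughput_rps) if feasible else None


if __name__ == "__main__":
    # Hypothetical sweep results for one executor and one workload.
    sweep = [
        OperatingPoint(batch_size=1, concurrency=1, throughput_rps=95.0, p99_latency_ms=11.0),
        OperatingPoint(batch_size=8, concurrency=4, throughput_rps=310.0, p99_latency_ms=46.0),
        OperatingPoint(batch_size=64, concurrency=8, throughput_rps=470.0, p99_latency_ms=180.0),
    ]
    best = select_under_budget(sweep, p99_budget_ms=100.0)
    if best is not None:
        report = ThroughputReport(
            model="example-8b", precision="fp8",
            executor="accelerator X / driver Y / runtime Z",
            batch_policy="dynamic, 5 ms timeout",
            measurement_window_s=600, p99_budget_ms=100.0, point=best,
        )
        print(f"{report.point.throughput_rps:.0f} req/s at p99 <= {report.p99_budget_ms:.0f} ms "
              f"(batch={report.point.batch_size}, concurrency={report.point.concurrency})")
```

With the hypothetical numbers above, the selected point is the batch=8 configuration: lower than the peak-throughput row, but the only high-throughput point the declared budget will reproduce in deployment.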
## The framing that helps

Throughput for AI inference is the rate of completed work per unit wall-clock time under a declared batch policy, concurrency, latency budget, and AI Executor. It is coupled to latency by the trade-off curve the system traces; it is bounded by — but not equal to — the bandwidth of the resources it depends on; and it is uninformative as a single number divorced from the operating-point disclosure that ties it to a deployment scenario.

LynxBench AI treats throughput as a function of batch and concurrency at a declared latency budget on a fully specified AI Executor — because the throughput-vs-latency trade-off is operationally meaningful only when both axes are scoped to the same disclosed operating point. The question to put to any throughput claim is whether the number was measured at the latency budget the SLO actually requires, or at a more permissive operating point that the deployment will not reproduce.