The single-request best-case trap

The easiest latency test to run on an AI inference service is the worst latency test for predicting production behavior: send one request at a time, wait for the response, record the time, repeat. The number that comes out is the AI Executor's best-case per-request latency on a quiescent system. It is a real number. It is also a number that the production deployment will essentially never see, because production deployments serve concurrent traffic against a continuously loaded inference server, and the latency distribution under those conditions does not resemble the single-request quiescent case at all.

Latency testing for AI inference has to be designed around the conditions production will actually impose, not the conditions easiest to instrument. That requires explicitly varying the axes that govern latency — batch size, concurrency, arrival pattern — and reporting the tail percentiles that latency-sensitive systems care about, not the averages that mask them.

Which three axes have to be declared?

A latency test that fixes only one variable under-specifies the result. The three axes that any meaningful latency test must declare are independently settable and interact:

- Batch size. Whether the inference server processes requests one at a time, in fixed-size batches, in dynamic batches with a timeout, or via continuous batching (for autoregressive models) determines how queue time accumulates and how kernel execution amortizes across the batch. The same model on the same accelerator produces very different latency distributions under each policy.
- Concurrency level. The number of simultaneous in-flight requests the test sustains determines queue depth, how often batches form at their target size, and how close the system runs to its saturation point. Low concurrency exposes per-request execution time; high concurrency exposes queue-and-saturation behavior; the relationship between them is the system's load profile.
- Request arrival distribution. A closed-loop test (where each finished request triggers the next) measures the system's throughput-bounded behavior. An open-loop test (where requests arrive on a fixed schedule regardless of completion) measures the system's response under independent load. Production traffic is closer to open-loop with bursty arrivals than to closed-loop, and a closed-loop test systematically understates queue-induced tail latency.

A test report that fixes one of these axes and varies the other two communicates how latency depends on the varied axes under a stated condition for the third. A test that varies or fixes all three without disclosing them produces a number that cannot be interpreted operationally.

Tail percentiles, not averages

The reason latency-sensitive systems exist is to bound the worst-case latency users experience, not the average. A system whose mean latency is 50 ms and whose p99 is 2 seconds delivers a different user experience than a system whose mean is 80 ms and whose p99 is 150 ms. Average latency does not distinguish them. p99 latency does.

The percentiles that latency tests should report at minimum:

- p50 (median) — the typical request's experience.
- p95 — the experience of one request in twenty.
- p99 — the experience of one request in a hundred. For high-traffic systems this is a non-negligible portion of total user-facing requests.
- p99.9 — the experience of one request in a thousand. For services with strict SLOs, this is often the controlling number.

Reporting the maximum latency observed during the test can also be informative, but it is sample-size-dependent and noisy; the percentiles above are more stable across runs. A benchmark that reports only mean or median latency systematically hides the operational risk that latency-sensitive systems exist to manage, and a deployment decision made from such a benchmark is uninformed about the regime that actually drives the service-level objective.
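To make the tail-vs-mean distinction concrete, here is a minimal sketch of a percentile report computed from raw per-request latencies. It assumes only that the harness has collected one wall-clock latency per completed request and that numpy is available; the function name and the synthetic sample data are illustrative, not part of any particular benchmark's API.

```python
import numpy as np

def latency_report(latencies_ms: list[float]) -> dict[str, float]:
    """Summarize per-request latencies (milliseconds) with the tail
    percentiles a latency-sensitive deployment decision actually needs."""
    samples = np.asarray(latencies_ms, dtype=float)
    return {
        "mean": float(samples.mean()),
        "p50": float(np.percentile(samples, 50)),
        "p95": float(np.percentile(samples, 95)),
        "p99": float(np.percentile(samples, 99)),
        "p99.9": float(np.percentile(samples, 99.9)),
        "max": float(samples.max()),  # informative but sample-size-dependent
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic illustration: most requests are fast, a small fraction sit
    # behind a deep queue. The mean barely moves; the tail explodes.
    fast = rng.normal(50, 5, size=9_800)       # ~98% of requests
    queued = rng.normal(1_800, 200, size=200)  # ~2% stuck behind a batch
    print(latency_report(np.concatenate([fast, queued]).tolist()))
```

In this synthetic run the mean stays under a hundred milliseconds while p99 lands near two seconds, which is exactly the gap a mean-only report hides.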
A latency-testing methodology checklist

A latency test that produces a result a deployment team can use should satisfy the following:

- Workload definition stated: model, model size, precision regime, input shape distribution.
- AI Executor stated: accelerator + driver + runtime + framework + inference runtime versions.
- Batch policy stated: static batch size N, dynamic batch with timeout T, or continuous batching with policy parameters.
- Concurrency level stated: number of simultaneous in-flight requests sustained during the test.
- Arrival distribution stated: open-loop (with arrival rate) or closed-loop, with any bursty/Poisson/uniform parameters.
- Warm-up window excluded: the first N seconds (long enough for thermal equilibrium and any one-time framework initialization) discarded from measurement.
- Measurement window long enough for thermal equilibrium: typically minutes, not seconds, for sustained-load representativeness.
- Percentiles reported: at minimum p50, p95, p99; for strict-SLO contexts, p99.9.
- Throughput reported alongside: so the latency numbers are scoped to a specific operating point on the throughput-vs-latency curve.
- Number of trials and inter-trial variance reported: to distinguish stable measurements from noisy ones.
- Co-tenant load disclosed: whether the host was otherwise quiet or under realistic background load.

A test that satisfies this list produces a result the reader can apply to their own deployment decision. A test that satisfies only a subset produces a result whose generalization is bounded by what's missing.

How latency testing relates to throughput testing

Latency testing and throughput testing are not separate concerns; they are the two axes of the same operating curve. Every (batch, concurrency, arrival) configuration produces both a latency distribution and an aggregate throughput, and the trade-off between them is the curve the system can traverse.

A complete latency-test report sweeps the configuration space and produces a curve, not a point: throughput on one axis, p99 latency on the other, and the curve traced by varying batch and concurrency. The deployment decision then becomes "where on this curve should we operate?" rather than "is this system fast enough?" — and the former is the question latency benchmarks should be designed to answer. This builds on the throughput-vs-latency trade-off: the two metrics are coupled by the system's saturation behavior, and any methodology that measures one without bounding the other produces a number divorced from the trade-off it sits inside.
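The sketch below shows one way such a curve can be traced: an open-loop driver with Poisson arrivals is run at several target request rates, latency is measured from each request's scheduled arrival time so that queueing delay counts against the server, and each operating point is reported as a (throughput, p99) pair. Everything here is illustrative: SimulatedServer and open_loop_trial are hypothetical names, the simulated server exists only so the sketch runs on its own, and its infer method stands in for whatever client call your deployment actually makes.

```python
import asyncio
import random
import time
import numpy as np


class SimulatedServer:
    """Stand-in for a real inference endpoint so the sketch is self-contained.
    It has a fixed number of execution slots; requests beyond capacity wait
    in a queue, which is where queue-induced tail latency comes from."""

    def __init__(self, slots: int = 2, service_time_s: float = 0.05) -> None:
        self._slots = asyncio.Semaphore(slots)
        self._service_time_s = service_time_s

    async def infer(self, prompt: str) -> None:
        # Replace this body with your real client call (HTTP, gRPC, ...).
        async with self._slots:
            await asyncio.sleep(random.gauss(self._service_time_s, 0.005))


async def open_loop_trial(server: SimulatedServer, rate_rps: float,
                          duration_s: float) -> tuple[float, float]:
    """Open-loop run: requests arrive on a Poisson schedule regardless of
    completion, and latency is measured from the scheduled arrival time so
    queueing delay is included in what the server is charged with."""
    latencies: list[float] = []
    tasks: list[asyncio.Task] = []
    start = time.perf_counter()
    next_arrival = 0.0

    async def one_request(scheduled: float) -> None:
        await server.infer("example prompt")
        latencies.append(time.perf_counter() - scheduled)

    while next_arrival < duration_s:
        # Wait until this request's scheduled arrival, then fire it without
        # waiting for earlier requests to finish (that is what open loop means).
        delay = (start + next_arrival) - time.perf_counter()
        if delay > 0:
            await asyncio.sleep(delay)
        tasks.append(asyncio.create_task(one_request(start + next_arrival)))
        next_arrival += random.expovariate(rate_rps)  # Poisson arrivals

    await asyncio.gather(*tasks)
    throughput = len(latencies) / duration_s
    p99_ms = float(np.percentile(latencies, 99)) * 1000.0
    return throughput, p99_ms


async def main() -> None:
    server = SimulatedServer(slots=2)
    # Sweep the target arrival rate to trace the throughput-vs-p99 curve.
    for rate in (5.0, 10.0, 20.0, 40.0):
        tput, p99 = await open_loop_trial(server, rate_rps=rate, duration_s=10.0)
        print(f"target {rate:5.1f} rps -> observed {tput:6.1f} rps, p99 {p99:8.1f} ms")


if __name__ == "__main__":
    asyncio.run(main())
```

A closed-loop variant would instead launch a fixed number of workers that each issue the next request only after the previous one returns; comparing the two against the same server is the quickest way to see how much queue-induced tail latency a closed-loop harness hides. A real run would also honor the checklist above: warm-up excluded, minutes-long measurement windows, multiple trials, and the AI Executor fully declared.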
The framing that helps

Latency testing for AI inference must declare batch policy, concurrency level, and arrival distribution; it must report tail percentiles, not averages; and it should produce a curve across the operating space, not a single point estimate. A best-case quiescent number is real but operationally unrepresentative; a percentile distribution under sustained, declared load is the minimum useful unit for a deployment decision.

LynxBench AI treats latency as a distribution under declared batch and concurrency configurations against a fully specified AI Executor, and treats tail percentiles — not the average — as required disclosure, because the latency-sensitive deployment decisions the methodology exists to inform are governed by tails, not means. The question to put to any latency claim before relying on it: is the number a tail percentile under realistic load, or a best-case quiescent average that production conditions will not reproduce?