# “Latency” without a domain isn’t a measurement

The word “latency” appears in performance reports across networking, storage, databases, web services, and AI inference, and the assumption that it means the same thing everywhere is the source of a surprising amount of cross-team miscommunication. In each domain, latency is the elapsed time between two events — but which two events, and what the workload that produces them looks like, differs enough that the numbers are not comparable across domains and not interchangeable within a benchmark report.

For AI inference, latency has a specific operational meaning. Pinning it down — and distinguishing it from the latency definitions used in adjacent domains — is the prerequisite for reading or producing useful inference benchmark results.

## What is latency in AI inference, precisely?

Latency in AI inference is the elapsed wall-clock time from the arrival of an input request at the inference service to the completion of the corresponding output, measured per request, under a declared batch size, concurrency level, and request arrival pattern.

Three things are notable about that definition.

- It is per-request. A single inference latency number describes one request. A batch of requests has many latencies, not one. Reporting “the latency” of a batched system without specifying which request in the batch (or which percentile across requests) is under-specified.
- It includes everything between the two events. Queue time, model-load time (if not amortized), framework dispatch, kernel execution on the accelerator, post-processing, and serialization back to the client are all inside the latency envelope. Reports that name only the kernel-execution component as “latency” are reporting a model-execution time, not an inference latency.
- It depends on conditions, not just the model and hardware. Batch size, concurrency, and request arrival distribution change the latency the same model produces on the same accelerator. Changing any of these without re-stating the configuration changes the number being reported.

## How AI inference latency differs from latency in other domains

| Domain | What latency is | What it depends on |
| --- | --- | --- |
| Networking | Round-trip transit time of a packet between endpoints | Distance, link bandwidth, queueing, protocol overhead |
| Storage | Time from I/O request to I/O completion | Queue depth, service time at the storage device, caching layer |
| Database query | Time from query submission to result return | Query plan, index hit/miss, lock contention, IO subsystem |
| Web service | Time from HTTP request to response received | Application processing + downstream calls + network legs |
| AI inference | Time from request arrival to inference output completion | Batch size, concurrency, model size, precision, executor saturation |

The numerical units (typically milliseconds) are the same. The physical quantities they describe are different. A networking latency of 5 ms and an inference latency of 5 ms are not comparable as “system performance”; they are reporting on different operations against different infrastructure with different governing dynamics. A benchmark report that mixes these without scoping each — for example, claiming an “end-to-end latency” of N ms without separating the network leg from the inference leg — is folding incommensurable quantities into a single number that no reader can decompose.
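To make the definition concrete, here is a minimal measurement sketch in Python. It is not tied to any particular serving stack: `send_request` is a hypothetical placeholder for a blocking client call to an inference endpoint, and the warm-up count is an illustrative choice. The point is only that the timed interval spans request arrival to output completion, and that the result is one latency per request.

```python
import time

def send_request(payload):
    """Hypothetical placeholder: issue one inference request and block
    until the full output has been received by the client."""
    time.sleep(0.005)  # stand-in for queueing + dispatch + kernels + post-processing

def measure_latencies(payloads, warmup=10):
    """Record per-request, end-to-end latency in milliseconds, excluding warm-up."""
    latencies_ms = []
    for i, payload in enumerate(payloads):
        start = time.perf_counter()     # request arrival (as seen by the client)
        send_request(payload)           # returns when the output is complete
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        if i >= warmup:                 # warm-up requests run but are not reported
            latencies_ms.append(elapsed_ms)
    return latencies_ms                 # one latency per request, not one number

if __name__ == "__main__":
    lats = measure_latencies([{"prompt": "..."}] * 110)
    print(f"{len(lats)} per-request latencies recorded")
```

Note that concurrency and arrival pattern live outside this sketch — a closed loop with one in-flight request is itself a load configuration — which is exactly why they have to be declared alongside the numbers.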
## Why a single average latency under-specifies AI inference

For AI inference specifically, the request-to-request variation in latency under load is large enough that a single mean or median number is inadequate as an operational measurement. The reasons are mechanical:

- Batch effects. When the inference server batches requests, the latency a request experiences depends on where in the batch window it arrived. The first request in a forming batch waits for the batch to fill or the timeout to fire; the last request in a forming batch experiences near-zero queue time but the same kernel execution.
- Concurrency effects. Under sustained concurrent load, queue depth fluctuates, and request latencies spread accordingly. Average latency under a load pattern hides the worst-case behavior the system is exposed to.
- Saturation effects. As load approaches the AI Executor’s saturation point, latency distributions become heavy-tailed: a small fraction of requests experience much larger latencies than the median, while the median itself moves only slightly.

The minimum useful reporting unit for AI inference latency is therefore a percentile distribution under declared load conditions: p50, p95, p99 — and frequently p99.9 for latency-sensitive systems — at a stated batch size, concurrency, and arrival distribution (a minimal reporting sketch appears at the end of this piece). A single average number under load is structurally incapable of expressing what a latency-sensitive deployment needs to know about the system.

The strategic argument lives in the throughput vs. latency trade-off; operationally, that trade-off is governed by the latency distribution, not by a point estimate of latency, and benchmarks that report point estimates leave the trade-off impossible to evaluate.

## What disclosure makes an AI latency number meaningful

A latency number for AI inference becomes interpretable when the report names:

- The model and its size.
- The precision regime of the inference (FP32 / FP16 / BF16 / INT8 / FP8 / quantization scheme).
- The AI Executor (accelerator + driver + runtime + framework + inference runtime).
- The batch size policy (static, dynamic with timeout, continuous batching).
- The concurrency level under which latency was measured.
- The request arrival distribution (closed-loop / open-loop / specific load shape).
- Which percentiles are reported (mean alone is insufficient).
- Whether warm-up was excluded and how long the measurement window was.

A latency report that satisfies this list is informative. A latency report that names a number without these dimensions is reporting on an unspecified executor under unspecified conditions, and any reader who tries to compare it to their own deployment is comparing a known thing against an unknown thing.

## The framing that helps

Latency for AI inference is the per-request, end-to-end elapsed time from request arrival to output completion under a declared batch, concurrency, and load configuration — and it is a different physical quantity than network, storage, database, or web latency. A useful AI latency report names percentiles under declared conditions, not an average without context.

LynxBench AI treats latency as a distribution measured under disclosed batch, concurrency, and arrival conditions on a fully-specified AI Executor — because point-estimate latency under unspecified load is structurally incapable of informing the deployment decisions latency-sensitive systems exist to make.
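As a concrete illustration of that reporting unit, here is a minimal sketch that turns recorded per-request latencies into a percentile report attached to its declared measurement conditions. The field names, configuration values, and the synthetic lognormal latencies are illustrative assumptions, not LynxBench AI output or a fixed schema.

```python
import numpy as np

def latency_report(latencies_ms, config):
    """Summarize per-request latencies as percentiles, tied to the declared conditions."""
    percentiles = [50, 95, 99, 99.9]
    return {
        "config": config,   # batch policy, concurrency, arrival pattern, precision, ...
        "n_requests": len(latencies_ms),
        "percentiles_ms": {
            f"p{p}": round(float(np.percentile(latencies_ms, p)), 2) for p in percentiles
        },
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic heavy-tailed latencies standing in for measured values.
    lats_ms = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)
    print(latency_report(lats_ms, {
        "batch_policy": "dynamic, max 8, 5 ms timeout",  # illustrative
        "concurrency": 32,                               # illustrative
        "arrival": "open-loop Poisson, 400 req/s",       # illustrative
        "precision": "FP16",                             # illustrative
    }))
```

A mean could be added to the same structure, but without the p95 / p99 / p99.9 entries the heavy tail described in the saturation discussion above is invisible.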