## A different memory pressure than weight quantization addresses

KV-cache quantization is sometimes discussed as if it were just another flavor of LLM quantization, alongside weight quantization. It is not. It addresses a different memory pressure, has a different accuracy risk profile, and answers a different deployment question. Treating the two as interchangeable produces evaluation mistakes that show up only in production.

Weight quantization reduces the static memory footprint of an LLM: the bytes that hold the model's parameters and that are loaded once per inference server. KV-cache quantization reduces the per-request key/value tensors that the model accumulates during generation, and that grow linearly with context length and request concurrency. The two compress different things, and the operational lever each gives the serving engineer is different.

## What does KV-cache quantization actually unlock?

During autoregressive generation, an LLM stores the key and value projections for every previously processed token, keyed by attention head and layer. This is the KV cache. Its size scales as roughly 2 × layers × heads × head_dim × bytes_per_value × context_tokens per concurrent request (worked through in the sketch at the end of this section). For a long context (tens or hundreds of thousands of tokens) and several concurrent requests, the KV cache becomes the dominant consumer of accelerator memory, exceeding the weight footprint of the model itself. When KV-cache memory becomes the binding constraint, the practical consequences are immediate: the maximum supported context window shrinks, the maximum number of concurrent requests collapses, or both.

KV-cache quantization addresses this directly. Storing keys and values in INT8, INT4, or FP8 instead of FP16 reduces the per-token cache footprint by 2x to 4x, which translates into proportional increases in either supported context length or supported concurrency on a fixed-memory accelerator.

This is a different optimization axis than weight quantization. Weight quantization makes the model fit on a smaller accelerator, or makes per-token inference faster by reducing weight-load bandwidth. KV-cache quantization makes longer contexts or higher concurrency fit on the same accelerator. They are complementary, not substitutes.

## Why the accuracy risk profile is distinct

The accuracy story of KV-cache quantization is not a smaller version of the accuracy story of weight quantization. It is structurally different.

Weights are static. Once a model is trained, its weight distributions are fixed. A calibration procedure for weight quantization observes those distributions on a representative input set and chooses scale factors that bound the quantization error tightly across known weight values. The quantization error introduced into the model is fully determined at calibration time and is constant across all inferences thereafter.

KV-cache values are activations, not weights. They are produced at runtime, conditional on the input prompt and on every previously generated token. Their distributions are workload-dependent, and they can exhibit large outliers: single attention positions whose key or value norms are many standard deviations above the typical range. Low-precision integer formats represent outliers poorly, because the format's representable range must be set wide enough to cover them, which leaves the typical values represented at coarser-than-necessary granularity.
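To make the scaling arithmetic and the 2x-to-4x claim concrete, here is a back-of-envelope sketch. Every number in it (the model shape, the memory budget left for the cache, the concurrency) is an illustrative assumption, not a measurement of any particular model or accelerator:

```python
# Back-of-envelope KV-cache footprint per cache precision.
# The model shape below is hypothetical, chosen only to make the
# arithmetic concrete; real models (e.g. those using GQA, which
# shrinks the number of KV heads) will differ.
LAYERS, KV_HEADS, HEAD_DIM = 80, 64, 128

def cache_bytes_per_token(bytes_per_value: float) -> float:
    # 2x covers both keys and values.
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * bytes_per_value

HBM_BUDGET_GIB = 60   # assumed memory left for the cache after weights
CONCURRENCY = 8       # assumed simultaneous requests

for name, nbytes in [("FP16", 2.0), ("FP8/INT8", 1.0), ("INT4", 0.5)]:
    per_tok = cache_bytes_per_token(nbytes)
    max_ctx = HBM_BUDGET_GIB * 2**30 / (per_tok * CONCURRENCY)
    print(f"{name:9s} {per_tok / 2**20:5.2f} MiB/token, "
          f"max ~{max_ctx:,.0f} tokens/request at concurrency {CONCURRENCY}")
```

Under these assumptions, halving the bytes per cached value doubles either the supported context length or the supported concurrency, which is exactly the lever described above.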
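The outlier problem is just as easy to see numerically. A minimal sketch, assuming symmetric per-tensor absmax INT8 quantization (one common scheme among several): a single large key value forces a wide scale, which coarsens every typical value that shares that scale.

```python
import numpy as np

def int8_roundtrip_error(x: np.ndarray) -> float:
    # Symmetric absmax quantization: the scale must cover the
    # largest magnitude present in the tensor.
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).clip(-127, 127)
    return float(np.abs(q * scale - x).mean())

rng = np.random.default_rng(0)
keys = rng.normal(0.0, 1.0, size=4096)        # typical key activations
print("mean error, no outlier  :", int8_roundtrip_error(keys))

keys_out = keys.copy()
keys_out[0] = 40.0                            # one outlier position
print("mean error, with outlier:", int8_roundtrip_error(keys_out))
```

Here the single outlier widens the scale by roughly an order of magnitude, and the mean error on the 4,095 typical values grows by about the same factor.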
The implication is that KV-cache quantization can produce one accuracy profile on a benchmark whose prompt distribution rarely produces large outliers, and a substantially worse profile on a deployment workload whose prompts routinely do. The gap is not noise; it is a real difference in the input distribution the cache observes.

## Comparing weight quantization and KV-cache quantization

| Dimension | Weight quantization | KV-cache quantization |
| --- | --- | --- |
| What is quantized | Model parameters (static) | Per-request key/value tensors (dynamic) |
| Memory pressure addressed | Static model footprint | Per-request cache growing with context length |
| Operational lever | Fits model on smaller accelerator; reduces per-token bandwidth cost | Increases max context length or max concurrency on the same accelerator |
| Distribution stability | Fixed once trained; calibrated once | Workload-dependent; varies per prompt and per token |
| Outlier behavior | Bounded by the training-time weight distribution | Includes runtime activation outliers that low-precision formats represent poorly |
| Calibration validity | Calibration data only needs to span weight value ranges | Calibration must span the deployment workload's activation distribution |

These differences are why a quantized-LLM accuracy report that does not separately disclose whether the KV cache was quantized, and if so with what scheme and what calibration, under-specifies the result.

## What this means for evaluation

A KV-cache quantization claim cannot be validated by extrapolating from a weight-quantization result on the same model. The two operate on different tensors, with different distributional properties, and their accuracy regressions are not additive in any clean way. Evaluating KV-cache quantization requires running the deployment workload, including its long-context and high-concurrency regimes, and measuring output behavior under those conditions, not extrapolating from short-context standard benchmarks.

The evaluation question for KV-cache quantization is also operational rather than purely accuracy-driven: what context length, at what concurrency, does each cache precision support before the accelerator runs out of memory? That number is a property of the deployment configuration, not of the model alone, and it is the number that determines whether KV-cache quantization is the right intervention for a particular memory-pressure problem.

## The framing that actually helps

KV-cache quantization is best understood as a dynamic-tensor compression technique applied at runtime to the activation tensors that grow linearly with context length and concurrency. It addresses a memory-pressure regime that weight quantization cannot, and its accuracy risk profile is determined by activation distributions that calibration must observe in deployment-shaped workloads, not in standard-benchmark prompts.

The general principle that quantization is controlled approximation rather than model damage holds for KV-cache quantization as it does for weight quantization. The KV-cache-specific point is that the activation-distribution dependency makes the calibration step strictly more workload-coupled than the weight-quantization equivalent, and that calibration step is what determines whether the bounded approximation stays bounded under the prompts the deployment will actually see.
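To make that workload coupling concrete, here is a minimal sketch on synthetic data (all shapes and distributions are illustrative assumptions, not a real calibration pipeline): per-channel INT8 scales are calibrated on well-behaved, benchmark-shaped activations and then applied to a deployment-shaped stream that contains rare large-norm key positions.

```python
import numpy as np

rng = np.random.default_rng(1)
HEAD_DIM = 128

def absmax_scales(samples: np.ndarray) -> np.ndarray:
    # Per-channel symmetric INT8 scales from calibration activations.
    # samples: (n_tokens, HEAD_DIM) stacked key activations.
    return np.abs(samples).max(axis=0) / 127.0

def int8_error(x: np.ndarray, scales: np.ndarray) -> float:
    q = np.round(x / scales).clip(-127, 127)
    return float(np.abs(q * scales - x).mean())

# Calibration set shaped like short-context benchmark prompts: no outliers.
calib = rng.normal(0.0, 1.0, size=(10_000, HEAD_DIM))
scales = absmax_scales(calib)

# Deployment-shaped stream: same distribution plus rare outlier tokens.
deploy = rng.normal(0.0, 1.0, size=(10_000, HEAD_DIM))
deploy[rng.choice(10_000, size=20, replace=False)] *= 25.0

print("error on calibration-shaped data:", int8_error(calib, scales))
print("error on deployment-shaped data :", int8_error(deploy, scales))
```

The scales are correct for the data they observed; the several-fold error growth comes entirely from the prompts they never saw. Calibrating on deployment-shaped prompts, or using a scheme that handles outliers explicitly, is what keeps the approximation bounded.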
LynxBench AI treats KV-cache precision as a separately reported regime from weight precision, with the calibration workload disclosed, because conflating the two in a single "INT4 quantized" label hides exactly the distributional dependency that determines whether the deployment will hold up.