## Why LLMs are an unusually good — and unusually risky — quantization target

LLM quantization is the practice of replacing the FP16 or BF16 weights of a large language model — and sometimes its activations — with lower-precision representations such as INT8, INT4, or sub-INT4 formats. The point is not to make the model “smaller” in an abstract sense. It is to reduce the number of bytes the accelerator must move from memory for every token generated. The token-by-token decode phase of LLM inference is overwhelmingly memory-bandwidth-bound rather than compute-bound, which makes any technique that reduces bytes-per-parameter act almost directly as a throughput multiplier.

That memory-bandwidth fact is what makes LLMs an unusually attractive quantization target. It is also what makes the accuracy story unusually easy to misread: a benchmark that reports “INT4 quantization with negligible accuracy loss” can be entirely correct on the workload it measured, and entirely misleading for the workload you intend to deploy.

### Why does memory bandwidth dominate LLM inference?

A modern LLM serves a single token by reading every weight in the model from accelerator memory at least once during the forward pass for that token. For a 70B-parameter model in FP16, that is roughly 140 GB of weight data the accelerator must fetch through its memory subsystem before it can emit one token. Even on accelerators with multiple terabytes per second of memory bandwidth, the memory access cost dominates the per-token wall-clock budget for the autoregressive generation phase.

Reducing each weight from 16 bits to 8 bits halves the bytes that must be moved. Reducing to 4 bits divides them by four. Because the bottleneck is bandwidth and not arithmetic, the throughput improvement tracks the byte reduction more closely than it would for a compute-bound workload such as image-model training, where the arithmetic dominates and lower precision does less to reduce wall-clock time.

This is why quantization is a first-order economic lever specifically for LLM inference: not because lower precision is intrinsically virtuous, but because LLM inference happens to be exactly the workload shape where bandwidth reduction translates directly into lower latency and higher concurrency.
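As a rough illustration of that scaling, here is a minimal back-of-the-envelope sketch in Python. It assumes the decode step moves every weight exactly once per token and ignores KV-cache reads, activation traffic, and kernel overheads; the function name and the bandwidth figure are illustrative assumptions, not measurements from any particular accelerator or benchmark.

```python
# Back-of-the-envelope decode-latency floor for a bandwidth-bound model.
# Assumes weight reads are the only memory traffic per generated token.

def per_token_floor_ms(n_params: float, bits_per_weight: int, bandwidth_gb_per_s: float) -> float:
    """Lower bound on decode latency per token, in milliseconds."""
    weight_bytes = n_params * bits_per_weight / 8            # bytes moved per token
    return weight_bytes / (bandwidth_gb_per_s * 1e9) * 1e3   # seconds -> milliseconds

if __name__ == "__main__":
    params = 70e9        # 70B-parameter dense model
    bandwidth = 3350.0   # GB/s, an illustrative HBM-class figure
    for bits in (16, 8, 4):
        print(f"{bits:>2}-bit weights: ~{per_token_floor_ms(params, bits, bandwidth):.1f} ms/token floor")
```

On those assumptions the floor falls from roughly 42 ms per token at 16 bits to roughly 10 ms at 4 bits. Real deployments sit above the floor, but the scaling with bytes per weight is the point.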
### Where the accuracy story breaks

The accuracy of a quantized LLM cannot be summarized by one number from one benchmark. Quantization accuracy is task-conditional and metric-conditional in ways that aggregate scores conceal. The same quantization scheme that preserves performance on multiple-choice benchmarks — where the model only needs to produce a correct token in a constrained position — can substantially degrade long-form generation, code generation, or behavior on out-of-distribution prompts.

Multiple-choice tasks are forgiving: they ask the model to pick the highest-probability completion among a small set of candidates. Long-form generation compounds errors token by token, and small per-token probability shifts at one position can cascade into qualitatively different outputs many tokens later; a model that diverges from its full-precision counterpart on, say, one token in two hundred will on average hit two or three divergence points across a 500-token output, each of which can redirect everything that follows. Code generation is even less forgiving, because syntactic correctness is binary at every token boundary.

The implication is that a quantized LLM evaluated on a benchmark suite oriented toward multiple-choice reasoning may report less than one percentage point of accuracy loss while showing materially worse behavior on the workloads that actually matter for a production deployment. The quantization scheme has not failed in any absolute sense — the evaluation has failed to measure the workload that will be deployed.

### Comparing quantization-evaluation regimes

| Evaluation regime | What it measures | What it misses |
| --- | --- | --- |
| Multiple-choice benchmark accuracy | Per-position token correctness in constrained outputs | Compounding error in long generations, code, OOD behavior |
| Aggregate perplexity on held-out text | Average per-token likelihood across a reference corpus | Tail behavior on rare or workload-specific input distributions |
| Long-form generation quality (human- or LLM-judged) | Coherence and correctness across many-token outputs | Latency cost, throughput characteristics under load |
| Workload-matched evaluation | Behavior on a sample of the actual deployment workload | Higher cost to set up; less directly comparable across reports |

Workload-matched evaluation is the regime that produces deployment-grade information; the setup cost and workload specificity that make it deployment-grade are also why it is the least common regime in published quantization claims.

### What this means for benchmarking quantized LLMs

If LLM quantization is evaluated as a deployment-grade decision rather than as a benchmark line, the evaluation has to disclose more than the quantization format and an aggregate accuracy number. It has to specify which weights and activations were quantized (weight-only, weight-and-activation, or with a separately quantized KV cache), which calibration data and calibration method were used, and which evaluation workload — including whether the evaluation included long-form outputs or only constrained tasks.
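To make that disclosure list concrete, here is a minimal sketch of the metadata a reproducible claim would carry, written as a Python dataclass. The class name, field names, and example values are illustrative assumptions, not an established schema or any particular vendor's report format.

```python
# Minimal sketch of the disclosure a quantized-LLM benchmark claim needs in
# order to be reproducible. Names and fields are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class QuantizedLLMReport:
    model: str                    # base checkpoint being quantized
    weight_format: str            # e.g. "INT4", "INT8"
    activation_format: str        # e.g. "BF16" for weight-only quantization
    kv_cache_format: str          # KV-cache precision, disclosed separately
    calibration_method: str       # e.g. "GPTQ", "AWQ"
    calibration_data: str         # which corpus, and how many samples
    eval_harness: str             # which evaluation harness and version
    eval_workloads: list[str] = field(default_factory=list)   # constrained tasks and long-form outputs
    accuracy: dict[str, float] = field(default_factory=dict)  # per-workload numbers, not one aggregate
    throughput_tokens_per_s: float = 0.0                      # measured alongside accuracy
```

A claim that populates every field can be compared with another claim that does the same; a claim missing any of them is, in the terms used here, incomplete rather than wrong.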
Benchmark reproducibility for quantized LLMs is therefore not a matter of citing a precision format. It is a matter of disclosing the full quantization tool chain and evaluation workload alongside the throughput and accuracy numbers. Two reports of “INT4 quantization with X% accuracy loss” that differ in tool chain and evaluation workload are not comparable, and the difference between them is not noise — it is methodology. The benchmark question for a quantized LLM is not “what accuracy did it lose?” but “on which workload, with which quantization scheme, with which calibration, and with which evaluation harness, did this accuracy number arise?”

### The framing that actually helps

LLM quantization is a deliberate trade between bytes moved per token and the precision of the numerical representations used to compute that token. The trade is favorable for many production workloads because LLM inference is bandwidth-bound, but the favorability is conditional on the deployment workload tolerating the specific accuracy regression that the chosen quantization scheme produces.

The same principle holds here as elsewhere: quantization is a controlled approximation — a calibrated, bounded trade, not a one-way degradation, with a deployment cost that is measurable rather than mythological. The LLM-specific point is that the bandwidth bottleneck makes the throughput payoff unusually large, which makes the temptation to under-evaluate the accuracy side unusually strong. A quantization claim for an LLM that does not name its evaluation workload is not wrong in the sense that its number is wrong; it is incomplete, because the number, on its own, does not describe the deployment behavior the workload owner needs to know.

LynxBench AI treats per-precision LLM evaluation as a per-workload, per-tool-chain measurement — with quantization scheme, calibration, and evaluation harness disclosed alongside throughput and accuracy — because the bandwidth-driven payoff of LLM quantization is exactly the regime where accuracy under-disclosure is most economically attractive and most operationally costly.