“Quantization loses X% accuracy” is the wrong shape of statement

A team considering quantizing a model for deployment looks for a number to plug into the trade-off calculation: how much accuracy will it lose? The literature seems to supply one: papers report accuracy deltas from quantization schemes, vendor materials cite typical losses, framework documentation includes example numbers. The problem is that none of these numbers transfer reliably to the team’s specific situation. Quantization accuracy loss is task-dependent, model-dependent, and metric-dependent, and a single percentage figure abstracted from these dimensions generalizes no further than its original measurement context. The implication is not that quantization is unpredictable; it is that the prediction has to be made on the team’s actual workload, not extrapolated from someone else’s numbers. The structure of why this is so, and what evaluation has to measure before deployment, is the operational content of getting quantization right.

What are the three axes of variability in quantization accuracy loss?

The same quantization scheme produces different accuracy losses along three independent axes:

Task. Different tasks have different sensitivities to numerical perturbation. Image classification often tolerates aggressive quantization with negligible accuracy loss, because the decision boundary in the model’s output space is robust to small perturbations. Retrieval tasks, where the model produces an embedding that has to match against a database, can be substantially more sensitive, because small embedding shifts move many items across the retrieval threshold. Reasoning tasks (chain-of-thought, multi-step arithmetic) can collapse under quantization that other tasks tolerate, because errors compound across reasoning steps. The same quantization scheme produces different accuracy deltas on different tasks because the tasks differ in how they convert numerical perturbation into output error.

Model. Within a single task, different models respond differently to the same quantization. A model trained with regularization that produces well-conditioned activations tolerates quantization better than one whose activations have heavy-tailed distributions. A model whose information is encoded redundantly across many parameters tolerates per-parameter quantization noise better than one whose information is concentrated in a few critical weights. Two models that achieve similar pre-quantization accuracy on the same task can have substantially different post-quantization accuracy because their internal numerical structure differs.

Metric. Within a single (task, model) pair, different evaluation metrics produce different “accuracy loss” numbers. Top-1 accuracy on a clean test set is one number; calibrated-probability metrics like expected calibration error are another; out-of-distribution accuracy is a third; per-class or per-stratum accuracy is yet another. Switching from one metric to another can change the apparent precision penalty by an order of magnitude, not because the model changed, but because the question being asked changed.

The interaction of these three axes produces a space in which a single number cannot characterize the trade-off. Reporting “quantization causes X% accuracy loss” without specifying task, model, and metric is reporting on one corner of this space and inviting the reader to assume the corner generalizes, which it does not.
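The practical consequence is that the measurement has to be made per cell of that space. The sketch below shows one minimal way to do that, with hypothetical evaluation callables standing in for the team’s own harness; nothing here is a specific library’s API.

```python
# Minimal sketch: report the quantization delta per (task, metric) cell rather than as
# one aggregate number. The evaluation callables are hypothetical stand-ins for the
# team's own evaluation harness; this is not a specific framework's API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class Cell:
    task: str    # e.g. "classification", "retrieval", "reasoning"
    metric: str  # e.g. "top1", "recall@10", "exact_match", "ece"


def quantization_delta_grid(
    eval_fp32: Callable[[str, str], float],   # (task, metric) -> score for the full-precision model
    eval_quant: Callable[[str, str], float],  # (task, metric) -> score for the quantized model
    cells: List[Cell],
) -> Dict[Cell, float]:
    """Return the accuracy delta for every cell; no cell is averaged away."""
    return {c: eval_fp32(c.task, c.metric) - eval_quant(c.task, c.metric) for c in cells}
```

The same INT8 scheme can show a near-zero delta in the classification/top-1 cell and a much larger one in the retrieval/recall@10 or reasoning/exact-match cells of the same grid, which is exactly the variability a single headline percentage hides.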
Robustness is a model property, measured per model

A useful way to think about quantization tolerance is as a property of the (model, task, quantization scheme) triple, measured rather than assumed. Some triples are robust: post-quantization accuracy matches pre-quantization accuracy closely across the relevant evaluation surface. Some are fragile: small quantization changes produce large accuracy shifts. Robustness is empirical; the literature can suggest which architectures and training regimes tend toward robustness, but it cannot certify a specific (model, task, scheme) triple without measurement.

The factors that correlate with robustness:

Well-conditioned activations. Models trained with normalization (BatchNorm, LayerNorm, RMSNorm) and with regularization that produces bounded activation ranges tend to quantize well, because the calibration step that maps the floating-point range to the quantized range has a tighter range to map.

Redundant parameter encoding. Models whose information is distributed across many parameters tolerate per-parameter quantization noise; models that concentrate information in a few critical weights are more vulnerable.

Stable training. Models that train smoothly to convergence (without late-training instability or catastrophic forgetting) tend to have parameter distributions that quantize cleanly; models that train through unstable regions can carry outlier weights that the quantization scheme handles poorly.

These are tendencies, not guarantees. The actionable step is to measure the candidate (model, task, scheme) triple on a representative evaluation set before deploying, because the literature’s tendencies do not relieve the team of the empirical check.

What the evaluation rubric has to declare

The accuracy-loss number a team uses to make a deployment decision is a function of the evaluation rubric the team applies. The rubric has to be declared explicitly for the number to be interpretable. The minimum disclosure surface:

Test set composition. Clean? Out-of-distribution? Stratified by class or scenario? Domain-matched to production?

Sample size. Large enough that the metric’s confidence interval is narrower than the precision-loss effect being measured.

Metric definition. Top-1 accuracy, F1, calibrated probability, retrieval recall@k, exact-match for reasoning, or a composite. Each measures something different.

Reference point. Loss relative to what? The full-precision (FP32) baseline? An FP16 baseline? A different quantization scheme? The reference choice changes the number.

Per-stratum reporting. Aggregate accuracy can hide failures concentrated in specific strata (rare classes, specific input distributions, edge cases).

Calibration step. Was the quantization calibrated, and on what data? The calibration data choice affects the model’s post-quantization behavior under distribution shift.

A precision-related accuracy claim that satisfies this disclosure list is interpretable. A claim that says “X% loss” without it is reporting one number from one rubric, and the reader’s situation may produce a different number from a different rubric on the same model.

What this means for benchmark methodology

Precision benchmarks for AI hardware are commonly reported as throughput at a precision regime. The accompanying accuracy disclosure is often missing, partial, or aggregated to a single number. This is the methodological gap that makes precision benchmarks structurally insufficient for deployment decisions.
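Closing that gap starts with writing the rubric down. The sketch below encodes the disclosure surface from the previous subsection as an explicit declaration; the field names are illustrative assumptions, not any particular benchmark framework’s schema.

```python
# Minimal sketch of an explicitly declared evaluation rubric: every field that changes
# the "accuracy loss" number is stated up front. Field names are illustrative, not a
# specific framework's schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class EvalRubric:
    test_set: str            # composition: clean, OOD, stratified, domain-matched to production
    sample_size: int         # large enough that the CI is narrower than the effect being measured
    metric: str              # "top1", "f1", "recall@10", "exact_match", "ece", or a named composite
    reference_baseline: str  # "fp32", "fp16", or another quantization scheme
    calibration_data: str    # what the quantizer was calibrated on, and how
    strata: List[str] = field(default_factory=list)  # strata to break out (rare classes, edge cases)

    def is_fully_declared(self) -> bool:
        # A claim missing any of these fields is a number from an unspecified rubric.
        return all([self.test_set, self.sample_size > 0, self.metric,
                    self.reference_baseline, self.calibration_data, self.strata])
```

The point of the declaration is not the code; it is that each field is a choice that changes the resulting number, and writing the choices down makes the number interpretable.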
A precision-aware benchmark methodology has to report the (throughput, accuracy) pair on the workload’s evaluation rubric, with the calibration scheme disclosed and the reference point specified. The team consuming the benchmark then has the inputs it needs to evaluate the trade-off on its own terms. A benchmark that reports throughput in isolation is reporting half of the trade-off, and the missing half is the half that determines whether the throughput gain is operationally usable. That accuracy loss is task-dependent makes the broader case; the operational expression here is that the trade-off is task-dependent, and the benchmark’s role is to expose that trade-off in a form that supports the team’s task-specific decision rather than collapsing it to a single point estimate that hides the dependencies.

Accuracy-loss-claim disclosure checklist

A “quantization loses X% accuracy” claim is interpretable only when each of the following is on the page:

Task named. The downstream task on which accuracy was measured is identified, not described as general capability.

Model named. The base model and the quantized variant are both identified by version, because robustness varies per model.

Metric named. The specific scoring rubric (exact-match, F1, BLEU, MMLU subset, calibrated human eval) is declared.

Reference baseline named. The X% is relative to a stated baseline (FP32, FP16, vendor reference), not to an unspecified original.

Calibration scheme disclosed. The calibration data set and procedure are named, because they affect post-quantization behavior under distribution shift.

Per-stratum decomposition reported. Aggregate accuracy is broken out by the strata where degradation concentrates (rare classes, long-tail inputs).

A claim that satisfies all six is decision-grade. A claim that satisfies fewer is a result from one rubric on one model, generalizing only as far as that pair.

The framing that helps

Quantization accuracy loss is a function of the (task, model, metric, calibration) tuple, and a single percentage abstracted from that tuple generalizes no further than its original measurement context. Robustness has to be measured per (model, task, scheme) triple on a representative evaluation rubric the team has declared explicitly. Precision benchmarks must report accuracy alongside throughput on the same workload to support a deployment decision. LynxBench AI treats per-precision performance and a declared accuracy criterion as inseparable outputs of the AI Executor specification, because the (throughput, accuracy) pair on the team’s actual workload is the trade-off shape a precision-related deployment decision needs. The question to ask of any quantization claim: which axis is it holding implicit, and would the implicit axis survive contact with the team’s production workload?
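To close the loop on the benchmark methodology above, here is a minimal sketch of what a (throughput, accuracy) benchmark record might carry. The field names are illustrative assumptions, not the actual AI Executor specification.

```python
# Minimal sketch of a benchmark record that refuses to separate throughput from accuracy.
# Field names are illustrative assumptions, not the actual LynxBench AI Executor schema.
from dataclasses import dataclass
from typing import Dict


@dataclass
class PrecisionBenchmarkResult:
    model: str                         # quantized variant, identified by version
    precision: str                     # e.g. "int8", "fp8", "int4"
    throughput_samples_per_s: float    # measured on the same workload as accuracy
    accuracy: float                    # measured under the declared rubric below
    accuracy_delta_vs_baseline: float  # relative to the rubric's stated reference point
    rubric: Dict[str, str]             # the declared rubric (see the EvalRubric sketch above)

    def throughput_is_usable(self, max_acceptable_loss: float) -> bool:
        # The throughput number only counts if the accuracy loss, on the team's own
        # rubric, stays within the tolerance the team declared for this workload.
        return self.accuracy_delta_vs_baseline <= max_acceptable_loss
```

A record shaped like this hands the team both halves of the trade-off, measured on the same workload under a declared rubric, which is the form a deployment decision can actually consume.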