## “Quantized via Hugging Face” is not a single thing

Hugging Face quantization is shorthand for a small ecosystem of tools (bitsandbytes, AutoGPTQ, AutoAWQ, the transformers library’s built-in quantization integration, and GGUF artifacts produced for llama.cpp-style runtimes) that each implement different quantization schemes. Two models published as “INT4 quantized via Hugging Face” can have substantially different accuracy and throughput profiles depending on which tool produced them. A benchmark that quotes the precision but omits the tool chain is under-specifying its result, and the gap is not minor.

This matters specifically for benchmark interpretation: the same nominal precision, applied through different tools, produces different actual numerical behavior, because the schemes differ in bit width, calibration procedure, scale-factor granularity, and runtime kernel implementation.

## What do the major Hugging Face quantization tools actually produce?

The Hugging Face ecosystem exposes four broad families of quantization tooling, and each family makes a different set of design choices.

**bitsandbytes** focuses on weight-only quantization with on-the-fly dequantization in the matrix-multiplication path. Its INT8 path uses LLM.int8(), a mixed-precision scheme that keeps a small fraction of outlier columns in higher precision and quantizes the rest. Its 4-bit path (NF4/FP4) uses block-wise quantization with small block sizes. The runtime cost of dequantization is paid per matrix multiplication, which makes the throughput profile sensitive to batch size.

**AutoGPTQ** implements the GPTQ scheme: post-training quantization of weights with per-column error compensation driven by a calibration set. The output is a quantized model whose weights are stored in a packed low-precision format and processed through GPTQ-aware kernels. The size and composition of the calibration data substantially affect the result: different calibration corpora produce different quantized weights.

**AutoAWQ** implements activation-aware weight quantization (AWQ): it identifies salient weight channels from activation magnitudes collected in a calibration pass, then applies per-group quantization with the salient channels protected. Its design assumption is that a small fraction of weights is responsible for most of the model’s output behavior, and that protecting those weights at higher effective precision preserves accuracy more efficiently than uniform quantization.

**GGUF** is a serialization format used primarily by llama.cpp-derived runtimes. It supports a family of quantization schemes (Q4_K, Q5_K, Q6_K, Q8_0, and others) that vary in bit width, block structure, and quantization granularity. A “GGUF quantized” model is therefore parameterized further by which GGUF scheme it uses; the format alone does not specify the numerical behavior.

These tools are not interchangeable. Their schemes differ in what they preserve, how they calibrate, and what runtime kernels they require. The sketches below show where each tool’s scheme parameters surface in code.
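To make the bitsandbytes design choices concrete, here is a minimal sketch of how its two paths are selected through transformers’ BitsAndBytesConfig. The model identifier is a placeholder, not a recommendation:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# INT8 path: LLM.int8() keeps outlier columns in higher precision
# and quantizes the rest of the weight matrix.
int8_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,  # activation magnitude that marks a column as an outlier
)

# 4-bit path: block-wise NF4 quantization; compute still runs in bf16,
# so the dequantization cost is paid inside every matmul.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # "fp4" selects the alternative 4-bit scheme
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,      # also quantizes the block scale factors
)

# "model-id" is a placeholder for any causal LM on the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "model-id",
    quantization_config=nf4_config,
    device_map="auto",
)
```

Note that no calibration dataset appears anywhere: bitsandbytes quantizes from weight statistics alone, which is exactly the kind of detail a benchmark disclosure needs to surface.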
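A GPTQ run, by contrast, is parameterized by a calibration corpus and a group size, and both change the artifact. A minimal sketch using transformers’ GPTQConfig, which delegates to the GPTQ backend; the model identifier is again a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("model-id")

gptq_config = GPTQConfig(
    bits=4,
    group_size=128,   # scale-factor granularity: one scale per 128 weights
    dataset="c4",     # calibration corpus; swapping it produces different weights
    tokenizer=tokenizer,
)

# Quantization runs while loading, driven by the calibration data above.
model = AutoModelForCausalLM.from_pretrained(
    "model-id",
    quantization_config=gptq_config,
    device_map="auto",
)
```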
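AWQ quantization is typically run through the AutoAWQ package itself; the calibration pass that selects salient channels happens inside quantize(). A sketch following AutoAWQ’s documented flow, with placeholder paths:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_config = {
    "w_bit": 4,           # weight bit width
    "q_group_size": 128,  # per-group quantization granularity
    "zero_point": True,   # asymmetric quantization with zero points
    "version": "GEMM",    # which AWQ kernel family the artifact targets
}

model = AutoAWQForCausalLM.from_pretrained("model-id")
tokenizer = AutoTokenizer.from_pretrained("model-id")

# The calibration pass that identifies salient channels runs here.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized("model-id-awq")
tokenizer.save_pretrained("model-id-awq")
```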
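For GGUF, the scheme is a property of the artifact and is conventionally encoded in the filename suffix. transformers can load a GGUF file via the gguf_file argument, which at least makes the scheme choice explicit at load time; the repository and file names below are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "some-org/some-model-GGUF"  # placeholder repository
gguf_file = "some-model.Q4_K_M.gguf"  # the Q4_K_M suffix names the scheme

# Caveat: transformers dequantizes GGUF tensors on load, while llama.cpp-style
# runtimes execute the quantized kernels directly, so throughput measured
# through this path is not the throughput of a llama.cpp deployment.
tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
```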
## Comparing what each Hugging Face quantization tool produces

| Tool | Quantization approach | Bit widths | Calibration | Runtime characteristics |
| --- | --- | --- | --- | --- |
| bitsandbytes | Mixed-precision INT8 (LLM.int8()) or block-wise NF4/FP4 | 8-bit, 4-bit | Outlier detection in INT8; block-wise statistics in 4-bit | Dequantization in matmul path; throughput sensitive to batch size |
| AutoGPTQ | Post-training weight quantization with per-column error compensation | 2- to 8-bit | Required; calibration set affects resulting weights | Custom GPTQ-aware kernels; weight-only quantization |
| AutoAWQ | Activation-aware weight quantization with salient-channel protection | 4-bit (typical) | Required; activation magnitudes drive salient-channel selection | AWQ-aware kernels; weight-only quantization |
| GGUF | Family of block-wise quantization schemes | 2- to 8-bit (Q2_K through Q8_0) | Per-block statistics; some schemes use importance matrices | llama.cpp-style runtimes; CPU and GPU kernels both common |

Two “INT4 quantized via Hugging Face” reports that come from different rows of this table are reporting on different artifacts, even if the bit width matches.

## Why benchmark disclosure has to name the tool chain

A benchmark of a quantized Hugging Face model under-specifies the AI Executor unless it names which tool produced the quantized weights, which scheme parameters were used, and which calibration data drove the calibration step. Without those, the throughput numbers reported for “a quantized model X” cannot be reproduced, and the accuracy numbers reported beside them cannot be compared across reports, because the same nominal precision can correspond to different actual numerical behavior depending on which tool produced the model.

This is not a peripheral disclosure issue. The choice of quantization tool changes the runtime kernel that gets invoked, which changes the throughput measurement. It changes the calibration procedure, which changes the accuracy measurement. And it changes the bit-packing layout, which changes the memory-footprint measurement. All three of the headline numbers a benchmark typically reports (throughput, accuracy, footprint) depend on the tool choice.

Bounded optimization in benchmarking, the principle that benchmark results are only comparable when the optimization effort applied to the system under test is named and bounded, therefore extends to the quantization tool chain. A benchmark that bounds optimization to “Hugging Face INT4” without naming the tool is bounding optimization to a region that contains substantially different artifacts.

## What a deployment-grade Hugging Face quantization benchmark must report

For a quantized-model benchmark to support a deployment decision rather than just a relative score, the disclosure has to cover the dimensions that actually determine the result:

- The specific tool used (bitsandbytes / AutoGPTQ / AutoAWQ / GGUF or other)
- The scheme parameters (group size, block size, bit width, scaling type)
- The calibration data set and size
- The runtime kernel and runtime version
- The hardware on which the throughput was measured
- The accuracy evaluation set, including whether it includes long-form outputs

A report missing any of these dimensions is informative about what was measured under unstated conditions, and uninformative about whether a re-run on a different but nominally equivalent setup would produce comparable results.
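One practical way to bound that disclosure is to record it as a structured artifact next to the benchmark numbers. The sketch below is hypothetical; the class name and fields are illustrative, not an existing LynxBench AI schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuantizationDisclosure:
    """Hypothetical record of the disclosure dimensions listed above."""
    tool: str                           # "bitsandbytes" | "AutoGPTQ" | "AutoAWQ" | "GGUF" | other
    scheme: str                         # e.g. "nf4", "gptq", "awq-gemm", "Q4_K_M"
    bit_width: int                      # nominal precision, e.g. 4
    group_or_block_size: Optional[int]  # scale-factor granularity, e.g. 128
    scaling_type: str                   # e.g. "per-group", "block-wise with double quant"
    calibration_dataset: Optional[str]  # None for calibration-free paths
    calibration_samples: Optional[int]
    runtime_kernel: str                 # e.g. "bitsandbytes matmul", "llama.cpp CUDA"
    runtime_version: str
    hardware: str                       # e.g. "1x A100-80GB"
    accuracy_eval_set: str              # note whether it includes long-form outputs
```

A report that fills every field is reproducible in the sense the section above asks for; a report that cannot fill a field has located exactly where its own disclosure is thin.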
## The framing that actually helps

Hugging Face quantization is best understood as a parameterized family of quantization tooling, not as a single “Hugging Face quantizes the model” operation. The general principle that quantization is controlled approximation rather than damage holds across the family, but the specific approximation each tool produces differs, and benchmark interpretation has to follow the tool, not the brand.

LynxBench AI treats the quantization tool chain as part of the AI Executor specification, alongside the hardware, runtime, and framework, because the per-precision performance and accuracy a benchmark reports are properties of the full stack, and the quantization tool is not a detachable layer on top of a hardware result.