## “Just halving the precision” doesn’t describe what FP16 does

The intuition that FP16 is FP32 with half the bits is correct as a bit-count statement and misleading as a numerical-properties statement. Going from 32-bit to 16-bit floating point did not uniformly halve the format’s capabilities. The reduction was unbalanced: FP16 keeps the sign bit, drops 3 of FP32’s 8 exponent bits, and drops 13 of its 23 mantissa bits. Because exponent bits act multiplicatively, those 3 bits cost far more than half of the dynamic range (256 representable powers of two shrink to 32), while the larger mantissa cut costs a comparatively moderate drop in precision (from about seven decimal digits to about three). The consequence is that the property that’s most reduced — dynamic range — is the property whose loss matters most for typical training workloads. This is why standalone FP16 training is unstable for many workloads where FP32 is comfortable, and why mixed-precision schemes exist to recover the lost dynamic range without paying the FP32 storage cost. Understanding the FP16 format precisely — what its bit allocation provides and what mixed-precision schemes pair it with — is the prerequisite for reading FP16-related performance claims, because “FP16” by itself does not describe a numerical regime; the surrounding scheme does.

### What is the IEEE-754 half-precision format?

IEEE-754 half-precision allocates 16 bits as follows:

1 sign bit · 5 exponent bits · 10 mantissa bits

The 5 exponent bits encode powers of two between roughly 2⁻¹⁴ and 2¹⁵. This is the format’s dynamic range, and it is much narrower than FP32’s range of roughly 2⁻¹²⁶ to 2¹²⁷. In practical terms, FP16 cannot represent very small values (they underflow to zero) or very large values (they overflow to infinity) in regions where FP32 has no difficulty.

The 10 mantissa bits encode the significant digits. This gives FP16 about three decimal digits of precision for distinguishing values of similar magnitude — substantially less than FP32’s seven decimal digits, but still sufficient for many computations where the relative ordering matters more than the fine numerical differences.

The reduction relative to FP32 is therefore not symmetric:

| Property | FP32 | FP16 | What’s reduced |
| --- | --- | --- | --- |
| Sign bit | 1 | 1 | Unchanged |
| Exponent bits | 8 | 5 | Reduced from 256 to 32 representable powers of two |
| Mantissa bits | 23 | 10 | Reduced from ~7 decimal digits to ~3 |
| Dynamic range | ~2⁻¹²⁶ to ~2¹²⁷ | ~2⁻¹⁴ to ~2¹⁵ | Massive narrowing |
| Mantissa precision | ~7 decimal digits | ~3 decimal digits | Moderate reduction |
| Bit total | 32 | 16 | Halved |

The “halving” is in the bit total. The dynamic range is reduced far more than half — by more than thirty decimal orders of magnitude on each end. That asymmetry is what governs FP16’s behavior on real workloads.

### Why dynamic range is the limiting property

For most training workloads, the values that arise during the forward and backward passes span many orders of magnitude. Activations in a deep network can range from very large (in early layers or in unnormalized regions) to very small (in late layers or after compression operations). Gradients, particularly on parameters far from the loss, can be many orders of magnitude smaller than the activations they correspond to. Loss values themselves can shrink as training converges. FP32’s exponent budget covers this span comfortably. FP16’s does not.

Two failure modes appear when an FP16 computation encounters values outside its representable range:

- **Underflow.** Small values (gradients, late-layer activations after compression) below FP16’s normal range of 2⁻¹⁴ lose precision rapidly as subnormals and, below roughly 2⁻²⁴, round to zero. Once a gradient becomes zero, the corresponding parameter does not update at all, and the training trajectory loses information that the FP32 version would have retained. Underflow is silent — there is no warning that a contribution was lost.
- **Overflow.** Large values (early-layer activations, unnormalized intermediate computations) above FP16’s maximum finite value of about 65,504 (just under 2¹⁶) saturate to infinity. Once a value becomes infinity, downstream computations that reference it produce NaNs, and the training trajectory destabilizes catastrophically.
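Both failure modes can be observed directly in FP16 arithmetic, without any training loop. A minimal sketch using NumPy’s float16 type (purely illustrative, not tied to any training framework):

```python
import numpy as np

# FP16's range limits, read straight off the format.
fp16 = np.finfo(np.float16)
print(fp16.tiny)            # ~6.10e-05 : smallest normal value (2**-14)
print(fp16.max)             # 65504.0   : largest finite value

# Underflow: a gradient-sized value that FP32 represents easily flushes to zero.
grad = np.float32(1e-8)
print(np.float16(grad))     # 0.0 -- the contribution is silently lost

# Overflow: a large activation saturates to infinity, and infinities breed NaNs.
act = np.float32(7.0e4)
h = np.float16(act)
print(h)                    # inf
print(h - h)                # nan -- downstream arithmetic is now poisoned
```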
The mantissa reduction is the secondary issue, not the primary one. A workload that survives the dynamic-range reduction can usually tolerate the mantissa reduction with negligible accuracy impact. A workload that fails on FP16 fails because of range, not precision. This is why a workload can run comfortably in BF16 (1+8+7 — the same exponent width as FP32) yet break in FP16 (1+5+10): the dynamic range, not the bit total, is what changed.

### What mixed-precision schemes are doing

Mixed-precision training combines FP16 computation with three specific mechanisms designed to supply the accumulation precision and dynamic range FP16 cannot provide alone:

- **FP32 master weights.** The model parameters are stored at FP32. The forward and backward passes are computed at FP16 (using the FP32 weights cast to FP16 for the computation), but the parameter updates accumulate against the FP32 master copy. This protects the long-term accumulation from FP16’s mantissa-precision limit.
- **FP32 accumulation in matrix multiplies.** The intermediate accumulation in matrix-multiply operations runs at FP32 even when the input operands are FP16. Hardware tensor cores are designed for exactly this pattern: FP16 inputs, FP32 accumulator, FP16 output. This protects the within-operation accumulation from precision loss.
- **Loss scaling.** Before the backward pass, the loss is multiplied by a large scalar (typically a power of two). All gradients computed during the backward pass are scaled by the same factor, which shifts them into the FP16 representable range and prevents underflow. After the backward pass, the scaled gradients are unscaled (divided by the same scalar) before the parameter update is applied. This recovers the dynamic range FP16 lacks for very small gradients.

A workload running under mixed precision is not running at FP16. It is running in a hybrid regime (FP16 + FP32 + loss scaling) whose numerical properties differ from those of either FP16 or FP32 alone, and the reported performance and accuracy of the workload are properties of the hybrid regime, not of FP16. This is why an “FP16 result” without a specification of the surrounding scheme under-specifies what the result represents. A pure-FP16 result (computation and storage entirely in FP16) is one regime; a mixed-precision result (FP16 compute, FP32 accumulation and master weights, with loss scaling) is a different regime; and the two can produce substantially different accuracy and stability profiles on the same model.
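To make the mechanics concrete, here is a minimal, framework-free sketch of the gradient path through one mixed-precision update step. The names (master_w, true_grad, loss_scale, lr) and values are illustrative only, not any library’s API:

```python
import numpy as np

master_w = np.zeros(4, dtype=np.float32)        # FP32 master copy of the parameters
true_grad = np.float32(3e-8)                    # a gradient magnitude FP32 holds easily
loss_scale = np.float32(2.0 ** 16)              # power-of-two scale applied to the loss
lr = np.float32(0.1)

# Without loss scaling: the backward pass materializes the gradient in FP16
# and it underflows to zero, so the parameter never moves.
grad_fp16 = np.float16(true_grad)
print(grad_fp16)                                # 0.0

# With loss scaling: scaling the loss scales every gradient by the same factor,
# pushing it into FP16's representable range.
scaled_grad_fp16 = np.float16(true_grad * loss_scale)
print(scaled_grad_fp16)                         # ~0.00197 -- survives in FP16

# Unscale at FP32, then update the FP32 master weights; the accumulation never
# happens in FP16.
unscaled = np.float32(scaled_grad_fp16) / loss_scale
master_w -= lr * unscaled
print(master_w)                                 # the update is preserved
```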
### What this means for FP16 benchmark interpretation

A benchmark that reports performance “at FP16” must disclose what it actually measured. The interpretable forms are:

- **Pure FP16:** computation and storage at FP16 throughout, no loss scaling, no FP32 accumulation. Used in some inference scenarios where the workload tolerates the regime.
- **Mixed precision (FP16 + FP32):** FP16 compute and storage with FP32 accumulation in tensor-core operations and FP32 master weights for parameter updates, plus loss scaling for gradients. The standard training-side use of FP16.
- **Mixed precision with framework-managed casts:** the framework’s automatic mixed-precision (AMP) implementation, which makes per-operation decisions about which ops run at which precision based on numerical-stability heuristics.

These are different regimes with different performance and accuracy profiles. A benchmark report that names “FP16” without specifying which regime asks the reader to assume one — usually mixed precision, but the assumption is invisible — and the result’s generalization to a different regime becomes the reader’s problem. The operational point follows from the mixed-precision mechanics above: mixed precision works by exploiting the workload’s tolerance for reduced precision in some operations while protecting the operations that need full precision, so the benchmark methodology has to disclose which operations ran at which precision for the result to characterize the regime rather than just the silicon.

### The framing that helps

IEEE-754 half-precision is a 1+5+10 bit allocation whose exponent reduction relative to FP32 narrows dynamic range far more than the bit-total halving suggests. Standalone FP16 is unstable for many workloads because of range, not because of mantissa precision. Mixed-precision schemes pair FP16 compute with FP32 accumulation, FP32 master weights, and loss scaling specifically to recover the range FP16 cannot provide alone. A “half-precision” performance result without disclosure of the surrounding scheme is under-specified. LynxBench AI treats the precision regime — the specific combination of computation precision, accumulation precision, master-weight precision, and loss-scaling configuration — as part of the AI Executor specification, because “FP16” alone is not a regime; the surrounding scheme is, and a benchmark that makes the regime explicit is the one that informs a deployment decision about precision selection.
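As an illustration of the kind of disclosure such a specification implies, the sketch below records a precision regime as plain data. The class and field names are hypothetical, not LynxBench AI’s actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PrecisionRegime:
    compute_dtype: str            # e.g. "fp16" -- precision of forward/backward math
    accumulation_dtype: str       # e.g. "fp32" -- accumulator inside matmul/tensor-core ops
    master_weight_dtype: str      # e.g. "fp32" -- precision the optimizer updates against
    loss_scaling: Optional[str]   # e.g. "dynamic", "static:2^15", or None
    amp_managed: bool             # whether a framework AMP policy chose per-op precisions

# "FP16" alone could mean either of these, and they are different regimes:
pure_fp16 = PrecisionRegime("fp16", "fp16", "fp16", None, amp_managed=False)
mixed     = PrecisionRegime("fp16", "fp32", "fp32", "dynamic", amp_managed=True)
```

Two results both labeled “FP16” that correspond to `pure_fp16` and `mixed` are measuring different regimes; only a record like the one above makes that visible to the reader.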