## “Default” is a design decision that got habituated

Most AI training, for most of the deep-learning era, has been done in IEEE-754 single-precision floating-point — FP32. The format is so consistently the default that it appears in tutorials and framework configurations as if it were a property of “training” itself rather than a specific design choice with specific assumptions. It is a design choice. The reasons FP32 became the default are concrete properties of its bit allocation, and the reasons to consider trading away from it are concrete properties of the workload’s tolerance for the trade. Understanding what FP32 actually represents — and what its structure provides — is the prerequisite for reasoning about whether a given workload can move to a lower-precision format and what it will give up in doing so.

## What is the IEEE-754 single-precision format?

IEEE-754 single-precision allocates 32 bits as follows:

1 sign bit · 8 exponent bits · 23 mantissa bits

The sign bit determines positive or negative. The 8 exponent bits encode a power of two between roughly 2⁻¹²⁶ and 2¹²⁷, which is the format’s dynamic range — how small or large a number it can represent at all. The 23 mantissa bits encode the significant digits of the value, which is the format’s mantissa precision — how finely it can distinguish two numbers of similar magnitude.

These two properties — dynamic range and mantissa precision — are the axes along which floating-point formats differ from each other. Every choice of bit allocation is a trade between them, and every change to a different format is a trade in those terms.

For FP32, the combined budget is large enough that ordinary numerical work — physical simulation, signal processing, scientific computation, and gradient-based optimization — does not encounter representation-induced instability under normal conditions. Numbers do not silently overflow or underflow in regions a workload is likely to traverse. Two values of similar magnitude can be reliably distinguished. Accumulation of many small contributions stays within the format’s representable range without saturation. That budget is the property the AI-training default was built around.

## Why FP32 became the AI training default

Training a neural network with gradient descent is, at the numerical level, an accumulation problem. The optimizer updates each parameter by accumulating contributions across many samples, many batches, and many epochs. Three properties of this accumulation matter for format selection:

- **Gradients can be small.** Particularly in deep networks, gradients on parameters far from the loss can be many orders of magnitude smaller than activations. The format must represent values across this dynamic range without underflow.
- **Updates accumulate.** Many small contributions sum to the parameter update. The format must maintain enough mantissa precision that the small contributions are not lost in the rounding when added to a larger running sum.
- **Stability matters across iterations.** A representation-induced error in one iteration can amplify in the next. The format must be conservative enough that ordinary training trajectories do not encounter pathological numerical behavior.

FP32’s 8-bit exponent and 23-bit mantissa happen to satisfy all three for the typical range of training workloads. The dynamic range comfortably covers the gradient/activation ratio in most networks; the mantissa precision is enough that small-update accumulation remains numerically meaningful.
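To make the accumulation point concrete, here is a minimal sketch assuming NumPy; the running value, the contribution size, and the step count are illustrative and not taken from any particular model. The same contribution that accumulates cleanly in FP32 is rounded away entirely in FP16, because it is smaller than half the gap between adjacent FP16 values near 1.0.

```python
import numpy as np

# Illustrative accumulation: a running value near 1.0 receives many small
# contributions, loosely mimicking how an optimizer accumulates updates.
running_fp32 = np.float32(1.0)
running_fp16 = np.float16(1.0)
contribution = 1e-4   # individually representable in both formats
steps = 10_000

for _ in range(steps):
    running_fp32 += np.float32(contribution)
    running_fp16 += np.float16(contribution)

print(running_fp32)  # ~2.0: the small contributions accumulate as expected
print(running_fp16)  # 1.0: every add rounds back down, so the updates vanish
```

Below that rounding threshold the updates are not merely less accurate; they are lost completely, which is exactly the failure mode the second property above describes.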
The format was not designed for AI training — it predates deep learning by decades — but its allocations match what training workloads need closely enough that it became the unthinking default. The corollary is that the default is not a moral commitment. It is a budget that happens to be conservative for most workloads. Different workloads have different tolerances, and a workload whose properties differ from the typical training profile may be able to use a smaller budget without losing what FP32 was protecting.

## What trading away from FP32 actually trades

A precision choice below FP32 reduces the dynamic range, the mantissa precision, or both. The two reductions have different consequences:

- **Reducing dynamic range** (e.g. moving from 8 exponent bits to 5, as in FP16) means small values can underflow to zero and large values can overflow to infinity in regions where FP32 would still represent them. For a workload whose gradients and activations span many orders of magnitude, this is the more dangerous reduction — the failure mode is silent loss of information, and the symptom is training divergence or accuracy collapse that doesn’t trace cleanly to a single computation.
- **Reducing mantissa precision** (e.g. moving from 23 mantissa bits to 7, as in BF16) means values close in magnitude become indistinguishable. For a workload whose accuracy depends on the relative ordering of activations rather than their fine numerical differences, this reduction is often well-tolerated.

The format choices in modern AI hardware — BF16, FP16, FP8, FP4 — are different points on this trade-off space, each declaring which of the two properties the workload designer believes can be reduced. A workload that needs dynamic range tolerates BF16 (which keeps FP32’s 8-bit exponent) but not FP16 (which cuts to 5). A workload that needs mantissa precision but not dynamic range can use FP16 in a mixed-precision scheme that recovers range elsewhere. A workload tolerant on both axes can move to FP8 with appropriate accuracy validation. *Precision as a design parameter* makes the broader case; the conceptual content here is that precision is a design decision about which numerical properties a workload can afford to reduce, not a single-axis “lower = worse” trade.

## What a precision-aware benchmark must report

A benchmark that reports performance at FP32 is reporting a result whose precision regime is the conservative default. The number is interpretable as a baseline. A benchmark that reports performance at a lower precision must report two coupled numbers: the throughput at the lower precision, and the accuracy of the workload’s output at that precision against the FP32 reference.

Throughput-only reporting at lower precision is structurally incomplete because the throughput gain is conditional on the accuracy holding. A workload that runs 4× faster at FP8 with no measurable accuracy loss has gained 4×. A workload that runs 4× faster at FP8 with a 5% accuracy degradation may not have gained anything operationally — the lost accuracy may exceed the value of the speedup.

The pair (throughput at precision X, accuracy at precision X relative to FP32) is therefore the minimum reporting unit for any precision-related performance claim. Reporting one without the other defers the trade-off to the reader without the data the reader needs to evaluate it.
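What that minimum reporting unit could look like in practice is sketched below. This is an illustrative shape rather than LynxBench’s actual interface: `precision_report`, `workload`, `inputs`, and `reference_outputs` are hypothetical names, and mean relative error against the FP32 reference stands in for whatever accuracy criterion a given workload actually declares.

```python
import time
import numpy as np

def precision_report(workload, inputs, reference_outputs, dtype):
    """Return the coupled (throughput, accuracy-vs-FP32) pair for one precision.

    `workload`, `inputs`, and `reference_outputs` are placeholders; the accuracy
    metric (mean relative error against the FP32 reference) is one possible choice.
    """
    cast_inputs = [x.astype(dtype) for x in inputs]

    start = time.perf_counter()
    outputs = [workload(x) for x in cast_inputs]
    elapsed = time.perf_counter() - start

    throughput = len(inputs) / elapsed  # samples per second
    rel_error = float(np.mean([
        np.abs(out.astype(np.float32) - ref).mean() / (np.abs(ref).mean() + 1e-12)
        for out, ref in zip(outputs, reference_outputs)
    ]))
    return {
        "dtype": np.dtype(dtype).name,
        "throughput_samples_per_s": throughput,
        "mean_relative_error_vs_fp32": rel_error,
    }

# Toy usage: a "workload" that squares its input, with FP32 outputs as the reference.
rng = np.random.default_rng(0)
data = [rng.standard_normal(1024).astype(np.float32) for _ in range(100)]
reference = [x ** 2 for x in data]
print(precision_report(lambda x: x ** 2, data, reference, np.float16))
```

Returning the two numbers as one object is the structural point: a claim like “4× faster at FP8” cannot be read off this report without the accuracy figure sitting next to it.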
## Format-by-format trade-off matrix

The formats commonly used in modern AI hardware position themselves on the FP32 trade-off space as follows:

| Format | Bits (sign + exp + mantissa) | Dynamic range vs FP32 | Mantissa precision vs FP32 | Typical workload fit |
|---|---|---|---|---|
| FP32 | 1 + 8 + 23 | Reference | Reference | Conservative training default; numerical-stability baseline |
| TF32 | 1 + 8 + 10 | Same | Reduced | NVIDIA training-throughput format on Ampere and later |
| BF16 | 1 + 8 + 7 | Same as FP32 | Substantially reduced | Range-sensitive training; activations spanning many orders of magnitude |
| FP16 | 1 + 5 + 10 | Substantially reduced | Moderately reduced | Mixed-precision training and inference where range fits |
| FP8 (E4M3) | 1 + 4 + 3 | Further reduced | Further reduced | Inference and selected training under careful calibration |
| FP8 (E5M2) | 1 + 5 + 2 | Same as FP16 | Substantially reduced | Gradient or activation paths needing range over precision |

Each row trades a different combination of range and precision. Choosing among them is a workload-specific decision about which numerical property the workload can afford to reduce, not a generic “lower = worse” ranking.

## The framing that helps

IEEE-754 single-precision is a 1+8+23 bit allocation that provides a wide dynamic range and 23 mantissa bits of precision. This budget happens to be conservative for the typical training workload, which is why FP32 became the AI training default. A precision below FP32 reduces dynamic range, mantissa precision, or both, and the choice is a workload-specific design decision — not a generic “smaller is worse” trade. Precision-related benchmark claims must report throughput and accuracy as a pair.

LynxBench AI treats performance per precision and a declared accuracy criterion as a joint output of the AI Executor specification — because precision is a design parameter that produces a (throughput, accuracy) pair, not a single number. The question to ask of any FP32-to-lower-precision claim is whether the accuracy axis is named explicitly for the workload that matters, or held implicit in a way that hides the trade.
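As a concrete footnote to the matrix, the two reductions can be isolated in a few lines. This is a minimal sketch assuming PyTorch (whose tensor dtypes include `bfloat16`); the values are illustrative, chosen only to fall inside one format’s budget and outside the other’s.

```python
import torch

# Dynamic range: a gradient-sized value below FP16's smallest subnormal (~6e-8)
# underflows to zero with 5 exponent bits, but survives with BF16's 8 exponent bits.
small = 1e-9
print(torch.tensor(small, dtype=torch.float16))   # tensor(0., dtype=torch.float16): underflow
print(torch.tensor(small, dtype=torch.bfloat16))  # ~1e-9: still representable

# Mantissa precision: two nearby values remain distinct with FP16's 10 mantissa
# bits but collapse to the same number with BF16's 7 mantissa bits.
a, b = 1.000, 1.001
print(torch.tensor(a, dtype=torch.float16) == torch.tensor(b, dtype=torch.float16))    # tensor(False): FP16 keeps the distinction
print(torch.tensor(a, dtype=torch.bfloat16) == torch.tensor(b, dtype=torch.bfloat16))  # tensor(True): BF16 rounds both to 1.0
```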