## A category, not a list

The proliferation of floating-point formats in AI hardware (FP32, TF32, BF16, FP16, FP8 in its E4M3 and E5M2 variants, FP4) can read like a list of unrelated abbreviations to be made sense of one at a time. They’re not unrelated. Every floating-point format is a fixed allocation of bits among three fields (sign, exponent, mantissa), and the choice of allocation encodes a specific declaration about which numerical property a workload values most. Read as a category, the formats become a structured space rather than an alphabet soup, and the choice between them becomes a design question rather than a vendor mystery. This piece situates the modern AI floating-point formats inside a single explanatory frame: what changes from format to format, what each change buys, and what each change costs.

## What do the three floating-point fields each control?

Every IEEE-754-style floating-point format has three fields:

- **Sign bit (1 bit).** Always one bit. Not where the differences live.
- **Exponent bits.** Encode a power of two that scales the mantissa. The exponent budget determines the format’s dynamic range: the ratio between the largest and smallest non-zero values it can represent. More exponent bits = wider range.
- **Mantissa bits.** Encode the significant digits of the value. The mantissa budget determines the format’s mantissa precision: how finely two numbers of similar magnitude can be distinguished. More mantissa bits = finer resolution.

For a fixed total bit budget, exponent and mantissa trade off against each other. Spending more bits on the exponent buys range at the cost of resolution; spending more bits on the mantissa buys resolution at the cost of range. Every format choice in modern AI hardware is a different point on this trade-off.

## The modern AI formats compared

| Format | Bit allocation (sign + exp + mantissa) | Total bits | Dynamic range | Mantissa precision | Designed for |
|---|---|---|---|---|---|
| FP32 | 1 + 8 + 23 | 32 | Wide | High | Conservative default for training |
| TF32 | 1 + 8 + 10 | 19 (operates in 32-bit container) | FP32-equivalent | Reduced | Training matrix-multiply on tensor cores |
| BF16 | 1 + 8 + 7 | 16 | FP32-equivalent | Substantially reduced | Training without losing FP32 dynamic range |
| FP16 | 1 + 5 + 10 | 16 | Substantially reduced vs FP32 | Reduced | Mixed-precision schemes (paired with FP32 accumulation) |
| FP8 (E4M3) | 1 + 4 + 3 | 8 | Reduced | Heavily reduced | Inference and forward-pass training; better mantissa for activations |
| FP8 (E5M2) | 1 + 5 + 2 | 8 | Less-reduced (FP16-like) | Heavily reduced | Gradients and backward-pass tensors needing more range |
| FP4 | Various (1+2+1, 1+3+0, etc.) | 4 | Heavily reduced | Heavily reduced | Aggressive inference quantization with calibration |

Two patterns are visible across the table.

First, the BF16-vs-FP16 distinction matters more than their identical 16-bit totals suggest. BF16 keeps FP32’s 8-bit exponent and gives up mantissa precision; FP16 keeps more mantissa bits but cuts the exponent to 5. A workload bottlenecked by dynamic range fails on FP16 in places where BF16 succeeds. A workload bottlenecked by mantissa precision degrades on BF16 in places where FP16 holds. The choice between the two 16-bit formats is therefore a workload property, not a hardware property.

Second, the two FP8 variants (E4M3 and E5M2) exist precisely because no single 8-bit allocation works for all parts of a training workload. Forward-pass activations have one numerical profile; gradient tensors have a different one. E4M3 favors mantissa for activations; E5M2 favors range for gradients. A workload that uses FP8 typically uses both variants in different roles, not one variant uniformly.
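The table’s range and precision columns follow mechanically from the bit allocations. Below is a minimal sketch that derives approximate bounds from each (exponent, mantissa) split, assuming plain IEEE-754-style conventions (biased exponent, implicit leading 1, subnormals ignored); real formats deviate in details (FP8 E4M3, for instance, repurposes encodings to reach a maximum of 448 where the plain formula gives 240), so the output is illustrative rather than normative.

```python
# Derive approximate dynamic range and resolution from a format's bit
# allocation alone, using plain IEEE-754-style conventions. Actual formats
# differ in details (E4M3's repurposed encodings, subnormal handling), so
# treat these numbers as illustrative.

FORMATS = {
    # name: (exponent_bits, mantissa_bits)
    "FP32":     (8, 23),
    "TF32":     (8, 10),
    "BF16":     (8, 7),
    "FP16":     (5, 10),
    "FP8-E4M3": (4, 3),
    "FP8-E5M2": (5, 2),
}

for name, (e, m) in FORMATS.items():
    bias = 2 ** (e - 1) - 1
    max_normal = (2 - 2.0 ** -m) * 2.0 ** bias  # largest finite normal value
    min_normal = 2.0 ** (1 - bias)              # smallest positive normal value
    epsilon = 2.0 ** -m                         # gap between 1.0 and the next value up
    print(f"{name:8s}  max~{max_normal:.3g}  min_normal~{min_normal:.3g}  eps~{epsilon:.3g}")
```

The output makes the table’s two range classes visible: FP32, TF32, and BF16 share one max/min pair (the 8-bit exponent), FP16 and FP8 E5M2 share another (the 5-bit exponent), and only the epsilon column separates formats within a class.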
## How a workload’s properties select the format

The format-selection question is structurally: “which of these losses can the workload absorb: reduced dynamic range, reduced mantissa precision, or both?” Three properties of the workload determine the answer:

- **Magnitude span of values being represented.** Activations and gradients spanning many orders of magnitude need exponent budget (BF16 or FP32-tier formats). Activations bounded to a narrow range survive on smaller exponent budgets (FP16, FP8 E4M3).
- **Sensitivity to small numerical differences.** Workloads where the relative ordering of activations matters more than their fine values tolerate mantissa reduction. Workloads where small differences carry information (some scientific or signal applications) do not.
- **Position in the training loop.** Forward-pass tensors typically have different numerical profiles than backward-pass tensors. Mixed-precision schemes route different stages of a single computation through different formats accordingly.

The answer is workload-specific. Two models that look architecturally similar can have different optimal precision regimes because their internal numerical distributions differ. The framework’s mixed-precision recipes are starting points; the actual deployment regime requires accuracy validation against the workload’s correctness criterion.
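The first of these properties, magnitude span, is directly measurable before committing to a format. Below is a minimal sketch, assuming NumPy, of the range check behind the BF16-vs-FP16 decision; the thresholds are the normal ranges implied by each format’s exponent budget, and `range_report` is an illustrative helper, not a framework API.

```python
import numpy as np

# Count the fraction of a tensor's values that a format would clip
# (overflow) or flush toward zero (underflow). Subnormals, which soften
# the underflow edge in practice, are ignored for simplicity.
NORMAL_RANGE = {
    # format: (smallest positive normal, largest finite value)
    "FP16": (2.0 ** -14, 65504.0),
    "BF16": (2.0 ** -126, 3.39e38),
}

def range_report(values: np.ndarray, fmt: str) -> dict:
    lo, hi = NORMAL_RANGE[fmt]
    mags = np.abs(values[values != 0])
    return {
        "format": fmt,
        "overflow_frac": float(np.mean(mags > hi)),
        "underflow_frac": float(np.mean(mags < lo)),
    }

# Gradients spanning many orders of magnitude stress FP16's 5-bit exponent
# long before BF16's 8-bit exponent notices them.
grads = np.logspace(-30, 4, num=10_000)
print(range_report(grads, "FP16"))  # large underflow fraction
print(range_report(grads, "BF16"))  # essentially zero
```

A tensor with a large underflow or overflow fraction under FP16 is the “bottlenecked by dynamic range” case from the table discussion; the same tensor passes cleanly under BF16’s FP32-equivalent exponent.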
## Why precision benchmarks must report accuracy alongside throughput

A benchmark that reports throughput at a precision lower than FP32 without reporting the accuracy that the lower precision produced is reporting half the trade-off. The other half, the accuracy retained, is the variable that determines whether the throughput gain is operationally usable or operationally meaningless. The patterns that missing-accuracy reporting hides:

- A workload that runs 2× faster at BF16 with no measurable accuracy loss has gained 2×. The same workload at FP8 with 8% accuracy loss has not necessarily gained anything; whether the gain is real depends on whether the application can absorb the 8% loss.
- A vendor benchmark that reports peak throughput at FP4 without naming the model, the calibration scheme, or the accuracy retained is reporting a hardware capability, not a deployment-relevant performance number.
- A comparison between two accelerators at “8-bit precision” is incomplete if the two devices used different FP8 variants (E4M3 vs E5M2), different calibration schemes, or different mixed-precision recipes. The throughput numbers are not comparing the same numerical regime.

A precision-aware benchmark protocol therefore reports throughput per precision and accuracy per precision, with the calibration or quantization scheme disclosed, so the reader can weigh both halves of the trade-off and decide whether the precision regime is applicable to their workload. “FP8, FP16, and BF16 as different operating regimes” makes the broader case; the operational expression here is that each precision is a regime with its own trade profile, and a benchmark must measure both the performance the regime offers and the accuracy the regime preserves for the result to characterize the regime rather than just the silicon.

## The framing that helps

- A floating-point format is a bit allocation between sign, exponent, and mantissa, and modern AI formats sit at different points on the (range, precision) trade-off.
- The choice between formats is a workload property (which of dynamic range or mantissa precision the workload can afford to reduce), not a hardware property.
- Benchmarks that report throughput at a given precision must also report accuracy at that precision against an FP32 reference, because precision is a (throughput, accuracy) pair, not a single number.

LynxBench AI treats per-precision performance and a declared accuracy criterion as paired requirements of the AI Executor specification, because the precision regime that actually executes is part of the executor, and a methodology that holds throughput and accuracy together is the one that can inform a precision-selection decision. The question to put to any precision-related performance claim: are throughput and accuracy reported together against the same workload, or is one axis silently held constant?
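To make “reported together” concrete, here is a minimal sketch of a paired per-precision record. The class, its field names, and the `accuracy_retained` helper are illustrative assumptions for this post, not the actual LynxBench AI schema, and the numbers in the usage example are made up for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PrecisionResult:
    """One benchmark result: a (throughput, accuracy) pair per precision."""
    precision: str        # e.g. "FP8-E4M3", "BF16"
    throughput: float     # samples/s on the declared workload
    accuracy: float       # task metric at this precision
    fp32_accuracy: float  # same metric from the FP32 reference run
    calibration: str      # quantization/calibration scheme used

    def accuracy_retained(self) -> float:
        # Fraction of FP32 accuracy preserved at this precision.
        return self.accuracy / self.fp32_accuracy

# Illustrative numbers only; a claim is deployment-relevant when both halves
# of the pair are present and the calibration scheme is disclosed.
r = PrecisionResult(precision="FP8-E4M3", throughput=9200.0, accuracy=0.742,
                    fp32_accuracy=0.761, calibration="per-tensor scaling")
print(f"{r.precision}: {r.throughput:.0f} samples/s, "
      f"{r.accuracy_retained():.1%} of FP32 accuracy retained")
```

A record like this answers the closing question directly: both axes are present, measured against the same workload, with the numerical regime disclosed.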