## “Same chip class” is not the right starting frame

A team comparing AI hardware options often starts with the assumption that an accelerator is an accelerator: that the GPU or specialized device that performs well on training will perform proportionally well on inference, and that the metric to optimize is “TFLOPS for AI.” This frame conceals the architectural divergence that has produced a distinct category of inference accelerators — devices designed around the access pattern, precision regime, and latency/throughput trade-off that inference workloads exhibit, not the ones training workloads exhibit.

Inference accelerators exist as a distinct category because the workload they are built for is fundamentally different from training, and the design decisions that optimize for one degrade performance or cost-efficiency on the other. Treating them as interchangeable with training hardware in benchmarks produces results that are uninformative for either decision.

### What does an inference accelerator optimize for?

Inference is the forward pass of a trained model, applied to new inputs at deployment time. The workload has properties that training does not:

- **Single-batch or small-batch latency matters.** Training tolerates large batches (the batch is the unit of optimization). Inference often must respond per request, which means small effective batch sizes and per-request latency dominance.
- **Memory traffic dominates compute.** For most modern model architectures at inference time, the workload is memory-bound: the time to fetch weights and activations from memory dominates the time to perform the arithmetic on them. Training is more compute-bound because the optimizer step and the backward pass amortize memory access across more arithmetic operations.
- **Lower precision is acceptable and often required.** Training generally requires higher-precision accumulation to keep gradients well-behaved (FP32, BF16, mixed-precision schemes). Inference can frequently use INT8, FP8, or other lower-precision formats without accuracy loss, which changes the hardware features the accelerator must provide.
- **Energy per inference is the cost metric.** Training cost is dominated by run-to-completion energy on a fixed dataset. Inference cost is dominated by per-query energy at deployment scale, which makes energy-per-operation a primary design constraint.
- **Model weights are static at deployment.** Training updates weights every step. Inference loads weights once and uses them for many requests, which permits architectural choices (weight stationarity, on-chip storage) that don’t apply to training.

These properties produce different design pressures on the silicon, the memory subsystem, and the software stack.
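The memory-bound point is easy to see with a back-of-the-envelope roofline comparison. The sketch below is an illustration, not a measurement: the parameter count (7B), the bytes per weight (FP16), and the accelerator's peak compute and memory bandwidth are all assumed figures chosen only to show how arithmetic intensity scales with batch size.

```python
# Roofline-style check: is a forward pass memory-bound or compute-bound
# on a given accelerator? All figures are illustrative assumptions.

def arithmetic_intensity(params: float, batch: int, bytes_per_weight: float) -> float:
    """FLOPs per byte of weight traffic for one forward pass.

    A dense forward pass does roughly 2 * params FLOPs per input, while
    every weight must be fetched from memory once per step regardless of
    batch size, so intensity grows linearly with the batch.
    """
    flops = 2.0 * params * batch
    bytes_moved = params * bytes_per_weight
    return flops / bytes_moved

# Hypothetical accelerator: 400 TFLOP/s peak at FP16, 2 TB/s memory bandwidth.
peak_flops = 400e12
mem_bandwidth = 2e12
machine_balance = peak_flops / mem_bandwidth  # FLOPs/byte needed to saturate compute

for batch in (1, 8, 64, 512):
    ai = arithmetic_intensity(params=7e9, batch=batch, bytes_per_weight=2.0)  # 7B FP16 weights
    bound = "memory-bound" if ai < machine_balance else "compute-bound"
    print(f"batch={batch:4d}  intensity={ai:7.1f} FLOP/byte  "
          f"(machine balance {machine_balance:.0f})  -> {bound}")
```

At batch size 1 the forward pass performs about one FLOP per byte of weight traffic, far below the assumed device's machine balance, which is why small-batch inference spends its time waiting on memory rather than on arithmetic. Only at large batch sizes, the regime training lives in, does the same device become compute-bound.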
### Architectural differences between training and inference hardware

| Dimension | Training-oriented hardware | Inference-oriented hardware |
| --- | --- | --- |
| Precision focus | FP32, BF16, mixed-precision; high-bit accumulation | INT8, FP8, FP16; sometimes INT4 / sub-byte formats |
| Memory subsystem | High bandwidth (HBM common); large capacity to hold optimizer states + gradients + activations | Bandwidth-optimized but often smaller capacity; sometimes on-chip weight storage |
| Interconnect | High-bandwidth multi-device interconnect (training scales to many accelerators) | Per-device or limited interconnect; inference often runs on a single device per replica |
| Compute precision mix | Tensor cores supporting the full set of training-relevant precisions | Tensor cores or matrix engines specialized for inference precisions |
| Power envelope | Higher absolute power; performance-per-watt secondary to throughput | Often optimized for performance-per-watt at the deployment power envelope |
| Software stack focus | Framework-side training APIs, distributed training primitives | Inference runtimes (TensorRT, OpenVINO, ONNX Runtime, vendor-specific); model compilation pipelines |

The columns are not strictly disjoint — a high-end training accelerator can perform inference, and a specialized inference accelerator can sometimes train smaller models — but the design centers are different, and the benchmarks that exercise each produce different rankings of the same set of devices.

### Why inference accelerators benchmark differently

Because training and inference are different workloads, the benchmarks that meaningfully evaluate them are different benchmarks. A training benchmark exercises the optimizer step, gradient accumulation, distributed all-reduce, and high-precision matrix multiplication on large batches. An inference benchmark exercises forward-pass latency, low-precision matrix multiplication on small batches, KV-cache management for autoregressive models, request-level scheduling, and energy-per-query. These are different code paths that use different hardware features.

Reporting a single “AI performance” number that purports to characterize a device for both workloads is methodologically uninformative. The device may be excellent for training and indifferent for inference (because its low-precision matrix engines or per-request latency profile are not the design center), or excellent for inference and inadequate for training (because its memory capacity or interconnect cannot hold or coordinate the training state).

A workload-faithful benchmark for inference reports per-precision throughput at realistic batch sizes for the deployment scenario, per-request latency at target throughput, and energy-per-inference at the operating point — and discloses the inference runtime, model compilation toolchain, and quantization configuration that produced those numbers. A training-style benchmark transposed onto an inference accelerator produces a number that does not predict deployment behavior.

### What this means for evaluating an inference accelerator

A team evaluating an inference accelerator should treat it as a distinct device category and apply benchmarks designed for the inference workload — not training-style throughput numbers extrapolated to inference.
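As a concrete illustration of "per-request latency at target throughput," here is a minimal, framework-agnostic harness sketch. The function names and the stand-in forward pass are hypothetical; in a real benchmark the `run_once` callable would wrap the actual inference executor (for example an ONNX Runtime or TensorRT session with the compiled, quantized model loaded on the accelerator), and energy-per-inference would be recorded separately at the same operating point.

```python
import statistics
import time

def measure_latency(run_once, target_qps: float, duration_s: float = 10.0) -> dict:
    """Drive run_once() at a fixed request rate and report per-request latency.

    Single-threaded sketch: it measures service latency at the offered rate.
    A production harness would issue requests concurrently so that queueing
    at the device also shows up in the tail percentiles.
    """
    interval = 1.0 / target_qps
    latencies = []
    start = time.perf_counter()
    next_send = start
    while time.perf_counter() - start < duration_s:
        now = time.perf_counter()
        if now < next_send:
            time.sleep(next_send - now)   # pace requests to the target rate
        t0 = time.perf_counter()
        run_once()
        latencies.append(time.perf_counter() - t0)
        next_send += interval

    latencies.sort()

    def percentile(q: float) -> float:
        return latencies[min(int(q * len(latencies)), len(latencies) - 1)]

    return {
        "requests": len(latencies),
        "mean_ms": statistics.mean(latencies) * 1e3,
        "p50_ms": percentile(0.50) * 1e3,
        "p95_ms": percentile(0.95) * 1e3,
        "p99_ms": percentile(0.99) * 1e3,
    }

# Stand-in for the forward pass of the model under test; replace with the
# real inference runtime call for the device being benchmarked.
def fake_forward_pass():
    time.sleep(0.004)  # pretend batch-size-1 inference takes ~4 ms

if __name__ == "__main__":
    print(measure_latency(fake_forward_pass, target_qps=100.0))
```

A number from such a harness is only interpretable alongside the stack that produced it, which is exactly what the executor specification records.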
The dimensions to capture in the executor specification include the inference runtime version, the model compilation toolchain (and its version), the precision regime the model was compiled to, the batch size and sequence-length profile of the benchmark workload, and the latency/throughput operating point being measured. A benchmark that captures these dimensions characterizes the inference executor. A benchmark that omits them characterizes a vendor-supplied number whose generalization to the team’s actual deployment is unknowable.

That training and inference are fundamentally different workloads makes the broader case; the practical point here is that the workload distinction is what produces the distinct hardware category, and the distinct hardware category requires a distinct benchmark methodology.

### The framing that helps

- Inference accelerators are a distinct category because inference is a distinct workload — different precision regime, different memory access pattern, different latency profile, different cost metric.
- Training-style benchmarks misrepresent inference accelerators. Inference benchmarks must use inference workloads and disclose the inference-runtime + compilation-toolchain + precision-regime stack that actually executed.
- LynxBench AI treats the inference runtime, model compilation toolchain, precision regime, and operating-point batch/latency profile as part of the AI Executor specification — alongside the accelerator hardware — because the distinct inference workload demands a distinct benchmark methodology that the executor specification has to support.
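As a closing illustration of the executor dimensions named above, here is one way they could be captured as a structured record. This is a sketch only: the field names and values are assumptions made for the example, not the actual LynxBench AI AI Executor schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class InferenceExecutorSpec:
    """Hypothetical record of the executor dimensions a benchmark must disclose.

    Field names are illustrative; they mirror the dimensions listed above,
    not a published LynxBench AI schema.
    """
    accelerator: str                       # hardware under test
    runtime: str                           # inference runtime name
    runtime_version: str
    compilation_toolchain: str             # model compilation pipeline and version
    precision: str                         # precision regime the model was compiled to
    batch_size: int                        # operating-point batch size
    sequence_length: int                   # sequence-length profile of the workload
    target_qps: Optional[float] = None     # throughput side of the operating point
    target_p99_ms: Optional[float] = None  # latency side of the operating point
    quantization_config: dict = field(default_factory=dict)

# Example values (placeholders, not measurements or vendor figures).
spec = InferenceExecutorSpec(
    accelerator="example-inference-accelerator",
    runtime="onnxruntime",
    runtime_version="1.18.0",
    compilation_toolchain="onnx export + graph optimizations, opset 17",
    precision="INT8",
    batch_size=1,
    sequence_length=2048,
    target_qps=100.0,
    target_p99_ms=50.0,
    quantization_config={"scheme": "post-training static", "calibration_samples": 512},
)
print(spec)
```

The point of such a record is the one this section makes: a throughput, latency, or energy figure for an inference accelerator is only interpretable together with the executor stack that produced it.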