# A GPU comparison is a function of its inputs, not a property of the GPUs

A GPU benchmark comparison is typically presented as if it were a property of the GPUs being compared: GPU A is “faster” than GPU B by some factor. That presentation is misleading. A GPU comparison is a function of the workload chosen, the precision regime evaluated, the software stack used on each candidate, and the saturation conditions held constant. Change any of these inputs and the comparison can reorder. Two published comparisons of the same two GPUs that differ on these inputs are reporting different numbers about different things.

This becomes most sharply visible in cross-vendor comparison, where the structural asymmetry between the two stacks makes the methodology dependence impossible to hide. Within-vendor comparison can paper over methodology assumptions because the stacks are similar enough that defaults move similarly across candidates. Cross-vendor comparison cannot.

## Why two same-GPU benchmark comparisons can rank candidates differently

The output of a GPU benchmark comparison is not “the relative performance of two GPUs.” It is the relative performance of two specific (GPU, software stack, workload, precision, configuration) tuples, evaluated under specific saturation conditions, with whatever optimization effort the benchmark applied to each candidate. This composition matters because each component can shift the result substantially:

- The workload determines which performance characteristic matters. A memory-bandwidth-bound workload can order the candidates differently than a compute-bound one; a workload that fits in cache can order them differently than one that does not.
- The precision regime determines which arithmetic units the workload exercises. A comparison at FP32 measures different hardware than a comparison at FP8 or INT4, even on the same chips.
- The software stack determines which kernels run. Different libraries, compiler versions, and runtime configurations produce different effective throughput on identical hardware.
- The saturation conditions determine whether the result reflects sustained or burst behavior. A short benchmark at modest load measures peak; a long benchmark at heavy load measures what the deployment will actually see.

A comparison report that says “GPU A is X% faster than GPU B” without disclosing the workload, precision, software stack, configuration, and saturation conditions is reporting a number whose generalization to the reader’s deployment cannot be assessed. The number is not wrong — it is under-specified.
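One way to make that dependence concrete is to write each side of a comparison as the full tuple rather than as a chip name. The sketch below is illustrative only; the classes and field names are assumptions introduced here, not a real benchmarking API or LynxBench’s schema. The point it encodes is that the headline ratio is a value attached to two complete configurations, so changing any field changes what was measured.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateConfig:
    """Everything a single measurement depends on, not just the chip."""
    gpu: str                 # accelerator model
    software_stack: str      # kernel library + compiler + runtime versions
    workload: str            # the benchmark task actually run
    precision: str           # e.g. "FP32", "FP8", "INT4"
    saturation: str          # e.g. "burst, 60 s" or "sustained, 2 h at 90% load"
    tuning_notes: str = ""   # per-candidate optimization applied, if any

@dataclass(frozen=True)
class ComparisonResult:
    """A comparison is a statement about two full configs, not two GPU names."""
    a: CandidateConfig
    b: CandidateConfig
    metric: str              # what was measured (throughput, p99 latency, ...)
    ratio_a_over_b: float    # the headline number, valid only for these configs

def describe(result: ComparisonResult) -> str:
    # The ratio is reported together with the configs that produced it.
    return (f"{result.a.gpu} vs {result.b.gpu}: {result.ratio_a_over_b:.2f}x on "
            f"{result.metric} ({result.a.workload}, {result.a.precision}, "
            f"stacks: {result.a.software_stack} / {result.b.software_stack})")
```

Read this way, two published reports that disagree about the same pair of GPUs are usually two different `ComparisonResult` values whose configurations differ, not a contradiction.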
## Why cross-vendor comparison is structurally harder

Cross-vendor GPU comparison faces a structural problem that within-vendor comparison does not: the software stacks are not identical, the precision-format support is not identical, and the available kernel optimizations are not identical. These asymmetries make “fair comparison” a methodological commitment rather than an automatic property of running the same benchmark on both candidates.

Within a single vendor’s product line, a benchmark suite typically uses the same kernel library, the same compiler, and the same set of precision formats across candidates. Differences in measured performance can plausibly be attributed to differences in the hardware, because the software variables are held closer to constant. Cross-vendor, none of those variables are constant. CUDA-tuned kernels do not exist for non-NVIDIA hardware, ROCm-tuned kernels do not exist for non-AMD hardware, and oneAPI-tuned kernels do not exist for hardware outside the Intel ecosystem. A cross-vendor comparison that uses each vendor’s optimal stack is not measuring a hardware difference; it is measuring a hardware-and-stack difference.

This is not a defect of cross-vendor benchmarking. It is the unavoidable consequence of the AI Executor being the unit of performance — the combined hardware-and-software system, not the chip alone.

## Comparing the methodological layers a GPU comparison embeds

| Methodological layer | What it controls | How it shifts the comparison |
| --- | --- | --- |
| Workload selection | Which performance characteristic dominates | Memory-bound vs compute-bound workloads can produce opposite orderings |
| Precision regime | Which arithmetic units are exercised | FP8/INT4 favor accelerators with strong low-precision support; FP32 narrows the gap |
| Software stack per candidate | Which kernels run on each GPU | Vendor-optimal stacks vs lowest-common-denominator stacks produce different results |
| Optimization effort | How much per-candidate tuning was applied | Unbounded optimization is favorable to the better-resourced side; absent optimization is favorable to the lower-overhead default |
| Saturation conditions | Whether sustained or burst behavior is measured | Short tests favor higher peak; long tests under load favor steadier sustained throughput |
| Reporting (mean vs percentile) | What the headline number summarizes | Mean compresses tail behavior; percentiles expose latency variance that the deployment will see |

A comparison that does not name where it sits on each row is producing a result whose interpretation requires guessing what the row values were.

## What “fair” means in cross-vendor comparison

Fair cross-vendor GPU comparison is not the absence of optimization on either side. The absence of optimization produces a result dominated by which side’s default configuration happens to be closer to optimal — a property of the defaults, not of the hardware. Fair comparison is bounded, declared optimization on both sides: a documented amount of per-candidate tuning effort, with the optimizations themselves disclosed, applied symmetrically to both sides of the comparison.

This is harder than it sounds. Bounded optimization requires a methodological decision about what counts as in-bounds tuning (kernel selection? configuration parameters? quantization scheme? workload partitioning?). The decision is not neutral — different bounds favor different sides of the comparison. The methodological honesty is in declaring the bounds and the rationale, not in pretending that any single bound is the obviously correct one.

The comparison that emerges from this discipline is one that supports a decision: it tells the reader what configurations of which hardware-and-software combinations produce what performance under what conditions, and it allows the reader to determine which configuration best matches their own deployment shape. A comparison that omits this discipline produces a number that is reproducible only on the comparison author’s setup.
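What a declared bound can look like is easiest to show as data. The following is a minimal hypothetical sketch, not a published format; the class names, knob names, and budget string are assumptions introduced here. It records which knobs were in-bounds, the tuning budget, and what was actually changed on each candidate, so the symmetry of the comparison can be audited after the fact.

```python
from dataclasses import dataclass, field

@dataclass
class OptimizationBounds:
    """The tuning allowed on each side, declared before measurement."""
    in_bounds_knobs: tuple[str, ...]   # e.g. ("kernel selection", "batch size")
    tuning_budget: str                 # e.g. "8 engineer-hours per candidate"

@dataclass
class CandidateTuning:
    """The tuning actually applied to one candidate, kept for disclosure."""
    candidate: str
    changes: dict[str, str] = field(default_factory=dict)

def audit_symmetry(bounds: OptimizationBounds,
                   a: CandidateTuning, b: CandidateTuning) -> list[str]:
    """Flag tuning that falls outside the declared bounds on either side."""
    problems = []
    for side in (a, b):
        for knob in side.changes:
            if knob not in bounds.in_bounds_knobs:
                problems.append(
                    f"{side.candidate}: '{knob}' was tuned but is not in the declared bounds")
    return problems

# Example: one side tuned a knob that was never declared in-bounds.
bounds = OptimizationBounds(("kernel selection", "batch size"), "8 engineer-hours per candidate")
a = CandidateTuning("GPU A", {"kernel selection": "vendor library", "batch size": "32"})
b = CandidateTuning("GPU B", {"kernel selection": "vendor library", "custom kernel": "hand-written attention"})
print(audit_symmetry(bounds, a, b))
# -> ["GPU B: 'custom kernel' was tuned but is not in the declared bounds"]
```

The value of recording the bound as data rather than prose is exactly the auditability the section above asks for: a reader can check that the same knobs were available to both sides without reconstructing the methodology from footnotes.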
## What this means for reading GPU benchmark comparisons

A reader of a GPU benchmark comparison has one practical question to ask of every published number: under what conditions was this measured, and how do those conditions relate to my deployment? If the workload, precision, software stack, optimization bounds, and saturation conditions are not disclosed, the question cannot be answered, and the number is informative about the comparison author’s setup but not about the reader’s deployment.

The general principle that methodology determines benchmark comparability applies in concentrated form to GPU comparison: cross-vendor comparison amplifies every methodological gap because the stacks differ on every axis. The discipline that makes such comparisons useful is the same discipline that makes any benchmark useful — bounded, disclosed, workload-anchored — applied with the rigor that the cross-vendor case requires.

## The framing that helps

A GPU benchmark comparison is a methodologically conditioned measurement of a hardware-and-software system, not a measurement of hardware in isolation. Cross-vendor comparison is structurally harder than within-vendor comparison because the stacks differ on every axis a comparison’s result depends on. Fair cross-vendor comparison requires bounded, declared optimization on both sides — and the disclosure of that bound is what makes the comparison auditable.

LynxBench AI treats GPU comparison as a per-workload, per-precision, per-stack, bounded-optimization measurement — with the bounds and the stacks disclosed on each side — because that disclosure is what determines whether the comparison transfers from the measurement context to the deployment context the reader cares about.
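As a closing illustration of that disclosure requirement, the reader’s question can be treated as a checklist. The field names below are hypothetical, not LynxBench’s report format; the only claim is the one made above, that a missing field means the headline number cannot be mapped onto the reader’s own deployment.

```python
# Hypothetical sketch: the methodology fields a comparison must disclose before
# its headline number can be related to a reader's deployment.
REQUIRED_DISCLOSURES = (
    "workload",
    "precision",
    "software_stack_per_candidate",
    "optimization_bounds",
    "saturation_conditions",
    "reporting_statistic",   # mean vs percentile
)

def missing_disclosures(report: dict) -> list[str]:
    """Return the methodology fields a published comparison failed to disclose."""
    return [key for key in REQUIRED_DISCLOSURES if not report.get(key)]

# Example: a report that names only the workload and precision.
report = {"workload": "LLM inference, 7B decode", "precision": "FP8"}
print(missing_disclosures(report))
# -> ['software_stack_per_candidate', 'optimization_bounds', 'saturation_conditions', 'reporting_statistic']
```

The list only restates the sections above; its use is that it turns “was this disclosed?” into a question with a checkable answer.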