## A category designed for a different problem

Capacity-planning tools have a long history in IT operations. They were built to answer a specific class of question: given an observed pattern of resource consumption (CPU, memory, disk, network) on a fleet of servers running a stable workload, how much resource will the same fleet need next quarter, and when should new capacity be provisioned? The tools that answer this question — APM platforms with capacity-planning modules, dedicated capacity-planning suites, infrastructure-as-code resource projection tools — are good at it.

The problem is that AI infrastructure planning is not the question they were built to answer. AI workloads change resource profile across regimes in ways that the historical-projection model these tools rely on cannot represent. The tools are useful for the parts of AI infrastructure that look like general IT (host fleets, networking, storage). They are structurally inadequate for the part that determines AI capacity: how the AI Executor’s saturation point shifts as the workload mix or request volume changes.

### Why general capacity-planning tools mismatch AI workloads

The standard capacity-planning approach is observation-and-extrapolation. The tool ingests time-series resource consumption from the deployed fleet, fits a model (linear, seasonal, ML-based) to the historical pattern, and projects forward. The tool can answer: “at the current growth rate, when will CPU utilization on this fleet exceed 80%?” or “what aggregate memory will this service need in six months?”

These projections work well when:

- **The workload’s resource profile is stable** — the same operations consume the same resources today as a year ago.
- **The growth pattern is smooth** — no large discrete shifts in demand or behavior.
- **The bottleneck resource is predictable** — the resource that fills first today will be the one that fills first tomorrow.
- **The performance ceiling is a property of the resource** — at 100% CPU, the workload is at its limit; further demand requires more CPU.

For database servers, web tiers, and storage backends running mature, stable workloads, all four assumptions usually hold, and the tools deliver useful projections.

### Why these assumptions break for AI workloads

For AI inference and training workloads, the assumptions that ground general capacity-planning tools fail in characteristic ways:

- **Resource profile shifts across phases.** Training and inference exercise the accelerator very differently. Adding training capacity for a workload mix that’s about to shift toward inference produces over-provisioned training capacity and under-provisioned inference capacity, even though the aggregate “GPU utilization” projection looked correct.
- **Resource profile shifts across model versions.** A larger model, a different precision regime, a switch from dense to MoE architecture, or a change in batch policy can move the workload’s bottleneck resource entirely. A fleet sized to memory bandwidth for one model can be compute-bound for the next, and a tool projecting from historical resource profiles will not see the shift coming.
- **Discrete bottleneck transitions.** Unlike a CPU that goes from 80% to 90% utilization smoothly, an AI workload often has a sharp saturation point where adding load produces disproportionate latency growth and minimal throughput gain. The tool’s smooth-projection model does not represent this knee (a sketch of the contrast appears after this list).
- **Aggregate utilization is misleading.** “GPU utilization at 60%” reported by general infrastructure tools typically means the GPU’s compute units were active 60% of wall-clock time. It does not mean the workload achieved 60% of the AI Executor’s effective throughput, because memory-bound workloads can show high “utilization” while delivering far less throughput than a compute-bound workload at the same utilization would. A tool projecting from utilization metrics treats these as equivalent and produces wrong forecasts for both.
- **Performance ceiling is not a single resource.** AI workload performance depends on the (accelerator + driver + framework + runtime + precision + workload) executor, and the bottleneck shifts within the executor as conditions change. A tool that models capacity as “more of resource X” cannot represent a saturation point that is actually a property of the executor configuration.
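The knee and the utilization mismatch are easier to see in a load sweep than in a utilization trend. The fragment below is a minimal illustration of that contrast; the numbers are hypothetical and the `LoadPoint` and `find_knee` names are invented for this sketch, not output from any particular tool.

```python
# Minimal sketch: contrast a smooth utilization extrapolation with a measured
# saturation knee. All numbers below are hypothetical.

from dataclasses import dataclass

@dataclass
class LoadPoint:
    offered_rps: float      # offered request rate during the sweep step
    achieved_rps: float     # throughput the AI Executor actually delivered
    p99_latency_ms: float   # tail latency at that load
    gpu_util_pct: float     # aggregate "GPU utilization" reported by a fleet tool

def find_knee(sweep: list[LoadPoint], min_marginal_gain: float = 0.5) -> LoadPoint:
    """Return the last point before the marginal throughput gained per added
    offered request/sec falls below `min_marginal_gain` (the saturation knee)."""
    for prev, cur in zip(sweep, sweep[1:]):
        marginal = (cur.achieved_rps - prev.achieved_rps) / (cur.offered_rps - prev.offered_rps)
        if marginal < min_marginal_gain:
            return prev
    return sweep[-1]

# Hypothetical sweep for one (model, precision, batch policy) configuration.
sweep = [
    LoadPoint(100, 100, 35, 41),
    LoadPoint(200, 198, 42, 58),
    LoadPoint(300, 290, 55, 71),
    LoadPoint(400, 330, 120, 79),   # latency jumps, throughput barely moves
    LoadPoint(500, 340, 310, 83),
]

knee = find_knee(sweep)
print(f"Saturation knee near {knee.offered_rps} offered RPS "
      f"(p99 {knee.p99_latency_ms} ms, reported util {knee.gpu_util_pct}%)")

# A utilization extrapolation sees 83% at 500 RPS and projects room for more
# load before "100% utilization"; the measured curve shows the executor stopped
# gaining throughput near 300 RPS.
```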
### Where general tools still help — and where they don’t

A pragmatic split is necessary. General capacity-planning tools remain useful for the general-IT components of AI infrastructure; they need to be supplemented with workload-anchored projection for the AI-specific part.

| Capacity question | General tools | What’s needed for AI |
| --- | --- | --- |
| Host CPU and memory growth | Useful | Sufficient |
| Network and storage capacity | Useful | Sufficient |
| Aggregate accelerator-hour budget | Partially useful (for cost forecasting) | Sufficient as a finance input |
| Inference fleet sizing for SLO | Misleading on its own | Needs workload-anchored saturation measurement |
| Training queue capacity | Misleading on its own | Needs per-job resource profile + scheduling model |
| Power / thermal capacity for AI deployment | Inadequate (TDP-based projections fail) | Needs measured per-workload draw |
| Bottleneck shift across model generations | Cannot represent | Needs executor-aware re-measurement |

The right-hand column is where the gap sits. Filling it requires measurement against the AI Executor under the workload, not projection from historical aggregate utilization. The two approaches are complementary, not competing — but only if the team running the projection knows which question each tool can answer.

### What workload-anchored projection looks like

The structure of a workload-anchored AI capacity projection is different from time-series extrapolation. The inputs are:

- **Per-workload saturation curves** — measured throughput-vs-latency curves for each (model, precision, batch policy) configuration the deployment will run, on the production AI Executor.
- **Workload-mix forecast** — projected fraction of fleet hours allocated to each workload over the planning horizon.
- **Per-workload demand forecast** — projected request volume or training run count for each workload.
- **SLO constraints** — the latency-budget envelope each inference workload must stay within.
- **Headroom policy** — the fraction of saturation capacity reserved as buffer for demand spikes and partial-failure tolerance.

The output is a per-workload capacity requirement (number of AI Executor instances of each type) under the SLO and headroom policy, aggregated into a fleet-level provisioning plan. This is a different shape of computation than the time-series extrapolation general tools perform — and it is the computation that produces fleet sizing that survives a workload-mix shift.
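As a rough sketch of that shape of computation, the following fragment sizes two hypothetical inference workloads from measured saturation curves, an SLO, and a headroom policy. The data structures, workload names, and numbers are illustrative assumptions, not any product’s implementation, and the workload-mix and training-queue inputs are omitted for brevity.

```python
# Illustrative per-workload sizing step; names and numbers are assumptions.
import math
from dataclasses import dataclass

@dataclass
class SaturationCurve:
    """Load-sweep measurements for one (model, precision, batch policy)
    configuration, taken on the production AI Executor."""
    workload: str
    points: list[tuple[float, float]]  # (per-instance requests/sec, p99 latency ms)

    def usable_rps(self, slo_p99_ms: float, headroom: float) -> float:
        """Highest measured per-instance throughput whose p99 stays within the SLO,
        derated by the headroom fraction reserved for spikes and failure tolerance."""
        within_slo = [rps for rps, p99 in self.points if p99 <= slo_p99_ms]
        if not within_slo:
            raise ValueError(f"{self.workload}: no measured point meets the SLO")
        return max(within_slo) * (1.0 - headroom)

def instances_required(curve: SaturationCurve, peak_rps: float,
                       slo_p99_ms: float, headroom: float = 0.3) -> int:
    """Per-workload capacity requirement: ceil(peak demand / usable per-instance RPS)."""
    return math.ceil(peak_rps / curve.usable_rps(slo_p99_ms, headroom))

# Hypothetical inputs: two inference workloads sharing a fleet over the horizon.
curves = {
    "chat-7b-fp8":   SaturationCurve("chat-7b-fp8",   [(120, 40), (240, 65), (300, 180)]),
    "summarize-70b": SaturationCurve("summarize-70b", [(18, 220), (30, 380), (36, 900)]),
}
demand_forecast_rps = {"chat-7b-fp8": 2600, "summarize-70b": 140}  # projected peak demand
slo_p99_ms          = {"chat-7b-fp8": 100,  "summarize-70b": 400}

plan = {
    name: instances_required(curves[name], demand_forecast_rps[name], slo_p99_ms[name])
    for name in curves
}
print(plan)  # {'chat-7b-fp8': 16, 'summarize-70b': 7}
```

The important design point is the input, not the arithmetic: the per-instance figure comes from a saturation curve measured on the production executor, so a new model version or precision change forces a re-measurement rather than silently invalidating a historical trend.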
Building on steady-state performance and capacity planning, the operational point is this: capacity for AI is a function of saturation behavior under realistic load, not of aggregate utilization, and the tools that measure and project saturation behavior are different from the tools that measure and project utilization.

### The framing that helps

General capacity-planning tools answer a question about historical resource projection that AI infrastructure planning is not asking. They remain useful for the IT components of AI infrastructure and for cost forecasting. They are not adequate for AI-specific capacity questions — fleet sizing under SLO, workload-mix shift, bottleneck transitions across model generations — because the assumptions they rely on (stable resource profiles, smooth growth, predictable bottlenecks, single-resource ceilings) do not hold for AI workloads. Workload-anchored projection against measured saturation curves is the missing piece.

LynxBench AI is built on the principle that workload-anchored capacity projection requires per-workload saturation measurements — throughput-vs-latency curves taken on the production AI Executor under the production workload — because that saturation behavior is the input AI capacity planning needs, and one that general projection tools cannot synthesize from utilization metrics alone.