## Two different things called "agents"

The word "agent" in AI now carries two distinct meanings that are regularly conflated. Agent-based modeling (ABM) refers to simulation systems in which autonomous entities (agents) interact according to local rules, and emergent behavior at the system level arises from those local interactions. LLM-based agents are systems in which a language model makes decisions, calls tools, and operates with some degree of autonomy to complete tasks. These are different tools for different problems, and conflating them leads to using the wrong approach.

## What agent-based modeling is

Agent-based modeling is a computational method for simulating complex systems by modeling individual actors and their interactions. Each agent has:

- A state (its current properties)
- A set of behaviors (rules governing how it responds to conditions)
- An environment it perceives and acts within
- Interactions with other agents and the environment

The power of ABM is emergence: system-level behavior that is not explicitly programmed but arises from agent interactions. Classic examples include traffic flow models, epidemiological spread models, and supply chain disruption simulations.

ABM has existed since the 1990s. Tools like NetLogo, Mesa (Python), and AnyLogic are purpose-built for it. This is not a new LLM capability.
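Those ingredients (state, local behavior rules, an environment, interactions) can be shown in a minimal, framework-free sketch. This is plain Python rather than Mesa or NetLogo, and every name and parameter is illustrative: a toy contagion model in which only local contact rules are programmed, yet a system-level epidemic curve emerges.

```python
import random

class Person:
    """An agent: a state plus local behavior rules. Nothing here encodes
    the shape of the epidemic curve; that emerges from interactions."""
    def __init__(self):
        self.state = "S"  # S = susceptible, I = infected, R = recovered
        self.days_infected = 0

def step(population, contacts_per_day=4, p_transmit=0.1, recovery_days=7, rng=random):
    """One tick: each infected agent contacts a few random others."""
    infected = [p for p in population if p.state == "I"]
    for person in infected:
        for other in rng.sample(population, contacts_per_day):
            if other.state == "S" and rng.random() < p_transmit:
                other.state = "I"
        person.days_infected += 1
        if person.days_infected >= recovery_days:
            person.state = "R"

def run(n=1000, seed_infected=5, days=120, seed=42):
    """Seeded run: same seed, same curve (computational reproducibility)."""
    rng = random.Random(seed)
    population = [Person() for _ in range(n)]
    for p in population[:seed_infected]:
        p.state = "I"
    curve = []
    for _ in range(days):
        step(population, rng=rng)
        curve.append(sum(p.state == "I" for p in population))
    return curve

curve = run()
print("peak infections:", max(curve))
```

Varying the seed gives the run-to-run stochastic variability that ABM studies treat as informative in its own right, not as noise to average away.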
## When ABM is the right choice

| Use case | ABM appropriate | LLM agents appropriate |
| --- | --- | --- |
| Epidemiological spread modeling | ✓ Thousands of heterogeneous agents | ✗ Not suitable |
| Supply chain disruption simulation | ✓ Supplier-manufacturer-retailer interactions | ✗ Not suitable |
| Traffic flow and urban planning | ✓ Vehicle behavior at scale | ✗ Not suitable |
| Customer behavior simulation | ✓ Market dynamics with many agents | ✓ For qualitative scenarios |
| Warehouse robotics optimization | ✓ Fleet coordination simulation | ✓ For task planning |
| AI task automation workflow | ✗ Not appropriate | ✓ Core use case |

ABM excels when:

- The system has many interacting entities (dozens to millions)
- Individual behavior rules are well-defined
- System-level behavior is what you need to study
- Stochastic variability between runs is informative
- Computational reproducibility is important

## ABM is not the same as using LLMs as agents

LLM-based agents are not running population simulations. They use language-model reasoning to make decisions, use tools, and complete tasks. The terminology overlap creates confusion: teams hear "multi-agent AI" and default to building LangChain workflows when the actual requirement is a population simulation. In our experience, this confusion most often appears in:

- Supply chain optimization projects (ABM for simulation, LLMs for analysis)
- Customer behavior modeling (ABM for scale, LLMs for individual-level qualitative scenarios)
- Logistics planning (ABM for fleet simulation, LLMs for exception handling)

## Combining ABM and LLM agents

The more interesting pattern is combining both: use ABM for large-scale simulation (thousands of entities with rule-based behavior) and LLM-based agents for the reasoning and adaptation layer, particularly for handling edge cases, exceptions, and policy updates that don't fit rigid rules.
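The division of labor can be sketched as follows. This is a hypothetical illustration, not a real integration: rule-based supplier agents handle normal operations, and only shortfalls that the rules cannot absorb are escalated to the reasoning layer. The `llm_replan` function is a stub standing in for an LLM-agent call; no real API is used.

```python
import random

class Supplier:
    """Rule-based ABM agent: ships what noisy capacity allows."""
    def __init__(self, name, capacity, rng):
        self.name = name
        self.capacity = capacity
        self.rng = rng

    def fulfill(self, demand):
        if self.rng.random() < 0.1:
            return 0.0  # illustrative disruption: supplier offline today
        available = self.capacity * self.rng.uniform(0.8, 1.0)
        return min(demand, available)

def llm_replan(event):
    # Placeholder for the LLM reasoning layer (e.g. a chat-completion call
    # that proposes re-routing). Stubbed so the sketch runs offline.
    return f"escalate: {event}"

def simulate(days=30, demand=100.0, shortfall_threshold=0.2, seed=7):
    rng = random.Random(seed)
    suppliers = [Supplier(f"S{i}", capacity=40, rng=rng) for i in range(3)]
    escalations = []
    for day in range(days):
        shipped = sum(s.fulfill(demand / len(suppliers)) for s in suppliers)
        shortfall = (demand - shipped) / demand
        if shortfall > shortfall_threshold:
            # The rules can't fix this: hand the exception upward.
            escalations.append(llm_replan(f"day {day}: shortfall {shortfall:.0%}"))
    return escalations

print("escalations:", len(simulate()))
```

The design point is the boundary: thousands of cheap rule evaluations per tick, and an expensive reasoning call only when a threshold says the rules have failed.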
A supply chain model might simulate ten thousand suppliers and retailers using ABM rules for normal operations, while an LLM agent handles anomaly detection, escalation decisions, and re-planning when disruptions occur. For the broader landscape of generative AI model types and where agentic systems fit, What Types of Generative AI Models Exist Beyond LLMs provides the architectural context.

## When does agent-based modeling outperform traditional ML?

Agent-based modeling outperforms traditional ML in three scenarios: when system behavior emerges from interactions between entities, when the entities have heterogeneous strategies that change over time, and when you need to evaluate interventions that have no historical precedent.

Traditional ML excels at learning patterns from historical data. But historical data cannot tell you what happens under conditions that have never occurred. ABM can: you define the agents' decision rules, run the simulation under novel conditions, and observe the emergent system behavior. This is why ABM is used for pandemic modeling, market regulation analysis, and urban planning: scenarios where policy decisions create conditions that have no historical analogue.

The weakness of ABM is calibration: the agents' decision rules and parameters must be specified by domain experts or calibrated against observed data. If the rules are wrong, the simulation's predictions are wrong. We address this by combining ABM with ML: use ML to learn agent decision rules from observed behavioral data, then use ABM to simulate system-level outcomes under novel conditions. This hybrid approach leverages ML's ability to learn from data and ABM's ability to extrapolate beyond observed conditions.

For practical deployment, we use ABM when the client's question is "what would happen if" rather than "what will happen." Predictive questions with stable conditions suit ML; counterfactual questions about interventions suit ABM.
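The learn-then-simulate hybrid can be sketched in a few lines. Everything here is an illustrative assumption: the synthetic "observed" dataset, the deliberately simple model form (per-price frequency estimates plus linear interpolation standing in for a real ML model), and all function names. The point is the shape of the pipeline: fit a decision rule from behavioral data, then ask the simulation a counterfactual question the data never answered.

```python
import random

# --- Step 1 (ML stage): learn an agent decision rule from observed data ---
# Toy "observed" dataset of (price, bought) pairs; synthetic, for illustration.
rng = random.Random(0)
observed = [(p, rng.random() < max(0.0, 1.0 - p / 10.0))
            for p in [2, 4, 6, 8] * 250]

def fit_purchase_rate(data):
    """Estimate P(buy | price) at each observed price level (frequency fit)."""
    rates = {}
    for price in {p for p, _ in data}:
        outcomes = [bought for p, bought in data if p == price]
        rates[price] = sum(outcomes) / len(outcomes)
    return rates

def predict(rates, price):
    """Interpolate between observed price levels (assumed model form)."""
    lo = max((p for p in rates if p <= price), default=min(rates))
    hi = min((p for p in rates if p >= price), default=max(rates))
    if lo == hi:
        return rates[lo]
    w = (price - lo) / (hi - lo)
    return rates[lo] * (1 - w) + rates[hi] * w

# --- Step 2 (ABM stage): plug the learned rule into a population simulation ---
def simulate_demand(rates, price, n_agents=10_000, seed=1):
    r = random.Random(seed)
    p_buy = predict(rates, price)  # learned rule drives each agent's decision
    return sum(r.random() < p_buy for _ in range(n_agents))

rates = fit_purchase_rate(observed)
# Counterfactual question: a price level (5) absent from the observed data.
print("simulated demand at price 5:", simulate_demand(rates, price=5))
```

A real deployment would replace the frequency fit with a proper supervised model and the single probability with richer, heterogeneous agent state; the calibration risk named above applies either way, since the simulation is only as good as the learned rule.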
The distinction is important because applying the wrong methodology wastes project time: ML cannot answer counterfactual questions without strong causal assumptions, and ABM cannot match ML's predictive accuracy on in-distribution forecasting tasks.