## Risk-based validation is the current regulatory expectation

GAMP 5 (Good Automated Manufacturing Practice: A Risk-Based Approach to Compliant GxP Computerized Systems) is the ISPE's framework for validating computerised systems in pharmaceutical and related regulated industries. Originally published in 2008 and substantially revised in the Second Edition (2022), GAMP 5 provides a structured approach to determining the validation effort appropriate for a given system based on its risk to product quality and patient safety.

The core principle is proportionality: higher-risk systems require more thorough validation activities; lower-risk systems require proportionate but less intensive effort. This is not a relaxation of regulatory expectations. It is a more efficient allocation of validation resources that focuses engineering effort where it has the greatest impact on product quality.

## The GAMP 5 software categories

| Category | Description | Validation approach | Examples |
| --- | --- | --- | --- |
| 1 | Infrastructure software | Qualification, no detailed validation | Operating systems, databases, network infrastructure |
| 3 | Non-configured products | Verification of intended use | Off-the-shelf instruments, standard firmware |
| 4 | Configured products | Configuration verification and testing | ERP, LIMS, MES configured for specific processes |
| 5 | Custom applications | Full lifecycle validation | Bespoke manufacturing control systems, custom analytics |

The original GAMP 5 assumed a clear boundary between configured (Category 4) and custom (Category 5) software. Machine learning systems challenge this boundary. An ML model trained on facility-specific data using a commercial framework (TensorFlow, PyTorch) is neither a configured product nor a fully custom application in the traditional sense. The Second Edition addresses this by introducing critical thinking as a validation principle, directing teams to assess what the system actually does rather than forcing it into a predetermined category.
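The category table above maps naturally onto a small lookup structure, which is handy when scripting validation-planning tooling. The names and data layout below are an illustrative sketch, not anything defined by GAMP 5 itself:

```python
# Illustrative mapping of GAMP 5 software categories to validation approaches.
# Category numbers and descriptions follow the table above; the data structure
# and function names are hypothetical, for a validation-planning script.
GAMP5_CATEGORIES = {
    1: {"description": "Infrastructure software",
        "approach": "Qualification, no detailed validation"},
    3: {"description": "Non-configured products",
        "approach": "Verification of intended use"},
    4: {"description": "Configured products",
        "approach": "Configuration verification and testing"},
    5: {"description": "Custom applications",
        "approach": "Full lifecycle validation"},
}

def validation_approach(category: int) -> str:
    """Return the validation approach for a GAMP 5 category.

    Raises for unknown categories (note that Category 2 from earlier GAMP
    editions was retired and does not appear in GAMP 5).
    """
    if category not in GAMP5_CATEGORIES:
        raise ValueError(f"Unknown GAMP 5 category: {category}")
    return GAMP5_CATEGORIES[category]["approach"]
```

A lookup like this keeps category definitions in one place, so a planning tool cannot silently drift from the published table.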
## What the Second Edition changes

The GAMP 5 Second Edition (2022) makes three significant shifts from the original framework:

- **Critical thinking over prescriptive testing.** Instead of mandating specific test types for each software category, the Second Edition directs validation professionals to assess risk and apply proportionate assurance activities. This aligns with the FDA's Computer Software Assurance (CSA) guidance.
- **Agile and iterative development.** The original GAMP 5 assumed waterfall-style development lifecycles. The Second Edition acknowledges iterative development methodologies and provides guidance on maintaining validation compliance in agile environments.
- **AI/ML considerations.** The Second Edition includes guidance on validating non-deterministic systems. It recognises that AI systems require continuous validation (ongoing performance monitoring) rather than one-time qualification, and that model changes constitute system changes requiring change control.

The practical implications for classifying and validating AI/ML software under GAMP 5 extend to how organisations handle model retraining, data pipeline changes, and performance drift, each of which may trigger re-validation activities.

## Applying GAMP 5 to AI systems: a decision checklist

1. **Does the AI system affect GxP data or product quality?** If yes, it falls under GAMP 5 scope. Determine the appropriate category based on system architecture and risk.
2. **Is the system deterministic or non-deterministic?** Non-deterministic systems (ML models) require continuous validation; one-time IQ/OQ/PQ is insufficient.
3. **What is the system's risk classification?** High-risk systems (batch disposition, quality decisions) require thorough validation. Advisory systems (suggestions, reports) require proportionate effort.
4. **How will model updates be managed?** Every model change (retraining, architecture change, data pipeline modification) requires change control with a documented impact assessment.
5. **What performance metrics define "validated state"?**
   Define acceptance criteria (accuracy, precision, recall, drift thresholds) before deployment, not after.

GAMP 5's risk-based approach is not optional: it is the framework that regulators expect pharmaceutical companies to apply. The alternative (uniform full validation regardless of risk) wastes resources and obscures the systems that actually need rigorous oversight.

## How does risk-based validation reduce effort without reducing compliance?

The GAMP 5 risk-based approach reduces validation effort by focusing testing on high-risk functionality and accepting lighter-touch verification for low-risk functionality. The total number of tests decreases, but the coverage of risk-critical functions actually increases compared with flat, one-size-fits-all validation.

The risk assessment evaluates each system function on two dimensions: the impact of failure (what happens if this function produces incorrect results?) and the probability of failure (how likely is incorrect behaviour given the technology and implementation?). Functions with high impact and high probability receive the most rigorous validation. Functions with low impact and low probability receive minimal validation, often just documented verification that the function exists and operates.

For a typical LIMS implementation, this approach reduces the total test count by 40–60% compared with validating every function at the same level of rigour. The reduction comes from low-risk functions: report formatting, user interface preferences, notification routing. These functions affect user convenience but do not affect product quality or data integrity, so minimal testing is justified.

The critical success factor is the risk assessment itself. An incorrectly performed risk assessment that classifies a high-risk function as low-risk creates a validation gap that regulators will identify.
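The decision checklist for AI systems can be sketched as a simple triage function. Everything below (the `AISystemProfile` fields, the action strings, the `triage` name) is a hypothetical illustration of how the checklist's logic composes, not a prescribed GAMP 5 artefact:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemProfile:
    # All field names are hypothetical, chosen to mirror the checklist items.
    affects_gxp: bool          # item 1: touches GxP data or product quality?
    deterministic: bool        # item 2: deterministic behaviour?
    high_risk: bool            # item 3: batch disposition / quality decisions?
    change_control: bool       # item 4: model updates under change control?
    acceptance_criteria: dict = field(default_factory=dict)  # item 5: defined pre-deployment

def triage(profile: AISystemProfile) -> list[str]:
    """Apply the decision checklist and return required validation activities."""
    if not profile.affects_gxp:
        return ["Outside GAMP 5 scope: no GxP validation required"]
    actions = ["Determine GAMP 5 category and document rationale"]
    if not profile.deterministic:
        # Non-deterministic (ML) systems: one-time IQ/OQ/PQ is insufficient.
        actions.append("Plan continuous validation (ongoing performance monitoring)")
    actions.append("Thorough validation" if profile.high_risk
                   else "Proportionate, risk-based validation")
    if not profile.change_control:
        actions.append("Establish change control for model updates")
    if not profile.acceptance_criteria:
        actions.append("Define acceptance criteria before deployment")
    return actions
```

For example, a non-deterministic batch-disposition model with change control and predefined criteria would be triaged into category determination, continuous validation, and thorough validation. The value of encoding the checklist this way is that every AI system gets the same questions asked in the same order, which is exactly what an inspector will look for.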
We use cross-functional risk assessment teams (IT, Quality, Operations, Subject Matter Experts) to ensure that all risk perspectives are represented. The risk assessment document becomes a key validation deliverable that regulators review during inspections to confirm that the validation scope was appropriately determined.
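The two-dimensional risk assessment described above (impact of failure against probability of failure) can be sketched as a prioritisation matrix. The three-level scales, the scoring rule, and the rigour labels here are illustrative assumptions; GAMP 5 does not prescribe a specific matrix:

```python
# Hypothetical impact x probability matrix mapping each system function to a
# validation rigour level. The scales and labels are illustrative only.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def validation_rigour(impact: str, probability: str) -> str:
    """Assign a rigour level from failure impact and failure probability."""
    if LEVELS[impact] == 2 and LEVELS[probability] == 2:
        return "rigorous"   # full challenge testing
    if LEVELS[impact] + LEVELS[probability] >= 2:
        return "standard"   # targeted functional testing
    return "minimal"        # documented verification only

# Example output of a cross-functional assessment session (function names
# are hypothetical): each function is scored on both dimensions, and the
# resulting plan becomes part of the risk assessment deliverable.
functions = {
    "batch disposition logic": ("high", "high"),
    "audit trail capture":     ("high", "medium"),
    "report formatting":       ("low", "low"),
}
plan = {name: validation_rigour(i, p) for name, (i, p) in functions.items()}
```

Recording the matrix scores alongside each function is what lets a regulator confirm, during inspection, that the reduced test count for low-risk functions was a deliberate, documented decision rather than an omission.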