## GxP is not one regulation — it is a family

Teams from technology backgrounds entering pharmaceutical environments encounter "GxP" and treat it as a single compliance requirement. It is not. GxP is a collective term for Good Practice regulations — a family that includes GMP (Good Manufacturing Practice), GLP (Good Laboratory Practice), GCP (Good Clinical Practice), GDP (Good Distribution Practice), and others. Each applies to a different phase of the product lifecycle, and each imposes different requirements on software and AI systems operating within its scope. An AI system used in manufacturing quality control (GMP) faces different validation requirements than an AI system used in clinical trial data analysis (GCP) or laboratory test interpretation (GLP).

### The GxP family and what each means for AI

| Regulation | Full name | Applies to | AI/ML implication |
|---|---|---|---|
| GMP | Good Manufacturing Practice | Manufacturing, quality control, packaging, release testing | AI in QC decisions requires full validation; changes need formal change control |
| GLP | Good Laboratory Practice | Non-clinical laboratory studies, safety testing | AI interpreting lab results must maintain data integrity and an audit trail |
| GCP | Good Clinical Practice | Clinical trials, patient data, trial conduct | AI processing patient data needs privacy controls plus data integrity validation |
| GDP | Good Distribution Practice | Storage, transport, distribution of pharmaceuticals | AI in cold-chain monitoring needs calibration traceability and alert validation |
| GVP | Good Pharmacovigilance Practice | Post-market safety surveillance | AI in adverse event detection needs sensitivity validation (false negatives are critical) |

## Risk-based validation: not all software requires the same depth

Software in GxP environments must be validated in proportion to its impact on product quality — not all software requires the same depth of validation. This principle (codified in GAMP 5 and reinforced by the FDA's Computer Software Assurance guidance) defines software categories:

- **Category 1 (infrastructure software):** Operating systems, databases — validated by the vendor; minimal user validation required
- **Category 3 (non-configured):** Off-the-shelf software used as-is — vendor validation plus user acceptance testing
- **Category 4 (configured):** Software configured for a specific use — configuration validation plus functional testing
- **Category 5 (custom):** Custom-developed software, including AI/ML models — full lifecycle validation: requirements, design, coding, testing, deployment, maintenance

Most AI/ML systems in pharma are Category 4 or 5, requiring substantial validation effort. But "substantial" does not mean "identical for all systems." A predictive maintenance model that alerts a human (who makes the decision) requires less rigorous validation than a model that autonomously releases or rejects product batches.

## What validation actually looks like for AI systems

For teams entering pharma from technology backgrounds, "validation" has a specific meaning that differs from software testing. A validated system is documented and qualified through a defined sequence:

- **User Requirements Specification (URS)** — defines what the system must do in operational terms
- **Functional Specification (FS)** — defines how the system achieves the requirements
- **Design Specification (DS)** — defines the technical implementation
- **Installation Qualification (IQ)** — confirms the system is installed correctly
- **Operational Qualification (OQ)** — confirms the system operates within defined parameters
- **Performance Qualification (PQ)** — confirms the system performs as intended in production conditions

For AI/ML systems, PQ is particularly challenging because model performance can drift over time as input data distributions shift. The GAMP 5 classification and validation approach details how to handle lifecycle validation for adaptive systems.
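The drift that threatens PQ can be measured objectively rather than argued about. A minimal sketch, assuming a single continuous input feature and using the Population Stability Index (PSI) — the feature values, sample sizes, and the common 0.2 rule-of-thumb threshold are illustrative assumptions, not requirements from any GxP guidance:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (captured at PQ time) and a
    production sample of the same continuous numeric feature."""
    # Bin edges from the reference distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))

    e_counts, _ = np.histogram(expected, bins=edges)
    # Clip production values into the reference range so outliers
    # are counted in the outer bins instead of being dropped
    a_counts, _ = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)

    # Convert to proportions; a small floor avoids log(0) for empty bins
    eps = 1e-6
    e_pct = np.clip(e_counts / len(expected), eps, None)
    a_pct = np.clip(a_counts / len(actual), eps, None)

    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical example: reference data frozen at PQ vs. recent production data
rng = np.random.default_rng(0)
reference = rng.normal(loc=100.0, scale=5.0, size=5000)   # e.g. assay values at PQ
production = rng.normal(loc=103.0, scale=6.0, size=2000)  # shifted production data

psi = population_stability_index(reference, production)
if psi > 0.2:  # common rule of thumb: PSI above 0.2 signals significant shift
    print(f"PSI={psi:.3f}: distribution shift -- trigger re-validation review")
else:
    print(f"PSI={psi:.3f}: within expected variation")
```

Logging the PSI per feature on a schedule gives the "objective evidence" that change control asks for: the number, the threshold, and the timestamp go into the record rather than a subjective judgment that the data "looks different."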
## The practical consequence for AI teams

Engineering teams accustomed to deploying models weekly and iterating based on A/B test results find pharma's change control requirements constraining. Every model update requires documented justification, impact assessment, and re-validation proportional to the change's risk. This is not bureaucracy for its own sake — it exists because a model error in manufacturing QC can affect patient safety.

The accommodation is to design AI systems for pharma with change control in mind from the start: modular architectures where individual components can be re-validated independently, documented performance boundaries that trigger re-validation when breached, and explicit data drift monitoring that provides objective evidence for when re-validation is needed.
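"Documented performance boundaries that trigger re-validation" can be made concrete in code. A minimal sketch of a boundary check that emits an audit-trail record whenever a monitored metric leaves its validated range — the metric names, bounds, and record fields here are illustrative assumptions, not prescribed by any regulation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class PerformanceBoundary:
    """A validated operating range for one monitored metric.

    A value outside [lower, upper] is objective evidence that the
    system has left its validated state and needs re-validation review.
    """
    metric: str
    lower: float
    upper: float

    def check(self, value: float) -> dict:
        breached = not (self.lower <= value <= self.upper)
        # Audit-trail record: what was measured, against what, and when
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "metric": self.metric,
            "value": value,
            "validated_range": [self.lower, self.upper],
            "breached": breached,
            "action": "trigger re-validation review" if breached else "none",
        }

# Hypothetical boundaries documented at PQ for a batch-release QC model
boundaries = [
    PerformanceBoundary("recall", lower=0.95, upper=1.0),
    PerformanceBoundary("false_positive_rate", lower=0.0, upper=0.05),
]

observed = {"recall": 0.91, "false_positive_rate": 0.03}
for b in boundaries:
    record = b.check(observed[b.metric])
    print(json.dumps(record))
```

Because the bounds are data rather than code, tightening or loosening them is itself a documented, reviewable change — which is exactly the property change control rewards.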