## CSV is a lifecycle, not a document

Computer System Validation (CSV) in pharmaceutical manufacturing is the documented process of ensuring that a computerised system consistently performs its intended functions in compliance with GxP requirements. It is not a single test, a report, or a certification. It is a lifecycle that begins at system requirements, continues through design and testing, and extends through the system's operational life until decommissioning.

The lifecycle produces a validation package: a traceable set of documents linking user requirements to design specifications, design specifications to test protocols, and test protocols to execution evidence. When an inspector reviews a validated system, they follow this traceability chain to verify that every stated requirement has a corresponding test and every test has documented evidence of execution and acceptance.

## The CSV lifecycle in practice

| Phase | Deliverables | Purpose |
|-------|--------------|---------|
| Planning | Validation Plan, System Description, Risk Assessment | Define scope, approach, acceptance criteria |
| Specification | URS, FS, DS, CS | Document what the system must do and how |
| Verification | IQ, OQ, PQ protocols and execution records | Prove the system works as specified |
| Release | Validation Summary Report, SOPs, training records | Authorise the system for production use |
| Operation | Change control, periodic reviews, incident management | Maintain the validated state |
| Retirement | Decommissioning plan, data migration, archive | Remove the system while preserving data |

URS = User Requirement Specification; FS = Functional Specification; DS = Design Specification; CS = Configuration Specification.

In our experience, a common failure mode is treating CSV as a one-time exercise completed at deployment. In practice, the most consequential validation activities occur during the operational phase, when system changes, infrastructure updates, and data migrations create opportunities for the validated state to degrade.
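To make the operational-phase concern concrete, here is a minimal sketch of how a change request might be screened for revalidation impact. The impact categories, record layout, and decision rules are illustrative assumptions for this example; in a real programme they live in the change-control SOP, not in code.

```python
from dataclasses import dataclass, field

# Hypothetical GxP-impacting functional areas (illustrative, not regulatory).
GXP_IMPACT_AREAS = {"batch_release", "process_control", "quality_data"}

@dataclass
class ChangeRequest:
    change_id: str
    description: str
    affected_areas: set = field(default_factory=set)  # areas the change touches
    vendor_patch_only: bool = False

def assess_impact(change: ChangeRequest) -> str:
    """Return a screening outcome for a proposed change.

    A change touching a GxP-impacting area triggers revalidation;
    a vendor patch outside those areas may need only regression checks.
    """
    if change.affected_areas & GXP_IMPACT_AREAS:
        return "revalidation required"
    if change.vendor_patch_only:
        return "regression checks only"
    return "documented assessment, no retest"

print(assess_impact(ChangeRequest(
    "CR-042", "Update batch release calculation", {"batch_release"})))
# -> revalidation required
```

The point of the sketch is the screening logic, not the code: every change gets a documented outcome, and anything touching quality-impacting functionality routes to revalidation.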
Every change to a validated system requires a documented impact assessment to determine whether revalidation is needed.

## CSA: the risk-based alternative

The FDA's Computer Software Assurance (CSA) guidance, published in 2022, offers a risk-based alternative to traditional CSV. CSA does not eliminate validation. It directs validation effort toward the activities that matter most (testing and assurance of the highest-risk functionality) while reducing the documentation burden for lower-risk components.

Under CSA, the determining factor is whether a software function has a direct impact on product quality. Functions with direct impact (automated batch release decisions, process control setpoints, quality data calculations) require thorough assurance activities, including scripted testing. Functions with indirect or no impact on product quality may be assured through unscripted testing, vendor documentation review, or operational checks.

For engineering teams, CSA changes the validation conversation from "document everything equally" to "identify what matters and test it thoroughly." The decision between full CSV and CSA is a risk-based choice that depends on system classification and quality impact.

## What changes for AI and ML systems

AI/ML systems introduce two challenges that neither traditional CSV nor basic CSA fully addresses:

- **Non-deterministic outputs:** an ML model's behaviour depends on its training data, so retraining can change results for identical inputs. Validation cannot assume fixed behaviour.
- **Continuous learning:** models that retrain on production data change their validated state without a formal change-control trigger.

The emerging best practice is continuous validation: ongoing monitoring of model performance (accuracy, precision, recall, drift metrics) against predetermined acceptance criteria, with automated alerts and triggered revalidation when thresholds are breached. This is consistent with CSA's risk-based philosophy while addressing the specific characteristics of learning systems.
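A continuous-validation check of this kind can be sketched in a few lines: compare live model metrics against predetermined acceptance criteria and raise an alert for every breach. The metric names and threshold values below are illustrative assumptions, not prescribed limits.

```python
# Acceptance criteria: metric name -> (direction, limit).
# "min" metrics must stay at or above the limit; "max" metrics at or below.
ACCEPTANCE_CRITERIA = {
    "accuracy":  ("min", 0.95),
    "precision": ("min", 0.90),
    "recall":    ("min", 0.90),
    "psi_drift": ("max", 0.20),  # population stability index (drift metric)
}

def check_validated_state(metrics):
    """Return an alert string for every metric outside its acceptance criterion."""
    alerts = []
    for name, (direction, limit) in ACCEPTANCE_CRITERIA.items():
        value = metrics.get(name)
        if value is None:
            alerts.append(f"{name}: metric missing from monitoring feed")
        elif direction == "min" and value < limit:
            alerts.append(f"{name}={value:.3f} below limit {limit}: trigger revalidation")
        elif direction == "max" and value > limit:
            alerts.append(f"{name}={value:.3f} above limit {limit}: trigger revalidation")
    return alerts

alerts = check_validated_state(
    {"accuracy": 0.97, "precision": 0.88, "recall": 0.93, "psi_drift": 0.12})
print(alerts)  # precision breaches its 0.90 minimum
```

In a production setting the same comparison would run on a schedule against the monitoring feed, with breaches routed into the quality system as formal revalidation triggers rather than printed to stdout.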
## What deliverables does a CSV project produce?

A complete Computer System Validation project produces a set of documentation that collectively demonstrates the system is fit for its intended use and compliant with regulatory requirements.

The Validation Plan defines the validation scope, strategy, roles, responsibilities, and acceptance criteria. It is approved before validation activities begin and serves as the governing document for the project.

The User Requirements Specification (URS) captures what the system must do from the user's perspective. Functional Specifications (FS) describe how the system will meet those requirements. Design Specifications (DS) describe the technical implementation. These three documents create a traceability chain from user needs through design.

Installation Qualification (IQ) verifies that the system is installed correctly per the design specifications. Operational Qualification (OQ) verifies that the system functions correctly under anticipated operating conditions. Performance Qualification (PQ) verifies that the system performs consistently under real-world conditions with actual data.

The Traceability Matrix links every user requirement to its design specification, test protocol, and test result, providing a single-document view of validation completeness. If any requirement lacks a corresponding test, the matrix reveals the gap.

We produce these deliverables using templates that our quality team maintains, customised for each project. The template approach ensures consistency across projects and compliance with our quality system, while reducing the documentation effort compared with creating each document from scratch. A typical validation project for a medium-complexity system (50 to 100 requirements) requires 4 to 8 weeks of validation effort alongside development.
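The gap check the Traceability Matrix enables can be sketched as a simple scan over its rows. The record layout and identifiers below are simplifying assumptions for illustration, not a mandated format.

```python
# Each row: (requirement_id, design_spec, test_protocol, test_result).
# None marks a missing link in the traceability chain.
matrix = [
    ("URS-001", "DS-010", "OQ-001", "Pass"),
    ("URS-002", "DS-011", "OQ-002", None),  # executed evidence missing
    ("URS-003", "DS-012", None,     None),  # no test protocol at all
]

def trace_gaps(matrix):
    """Return requirements whose traceability chain is incomplete."""
    gaps = []
    for req, design, protocol, result in matrix:
        if protocol is None:
            gaps.append((req, "no test protocol"))
        elif result is None:
            gaps.append((req, "no execution evidence"))
    return gaps

print(trace_gaps(matrix))
# -> [('URS-002', 'no execution evidence'), ('URS-003', 'no test protocol')]
```

This is exactly the walk an inspector performs by hand: follow each requirement forward to a test and its evidence, and flag any row where the chain breaks.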