Why quality control now defines digital pathology
Whole‑slide imaging (WSI) has moved from pilots to daily practice. Labs scan at scale, share cases, and run algorithms on multi‑gigapixel images. That scale brings risk. Artefacts, colour shifts, focus issues, and cohort drift can sink accuracy and delay reports.
Patients need confidence that digital reads match glass. So do pathologists. The College of American Pathologists (CAP) set clear expectations: labs must validate WSI for diagnostic use and show equivalence with light microscopy before routine reporting (College of American Pathologists, 2022).
CAP notes that the US Food and Drug Administration has approved select WSI systems for primary diagnosis, which raises the urgency for robust validation in real settings (Evans et al., 2022).
CAP’s guideline update provides a concrete bar. It reaffirms using a validation set of at least 60 cases, measuring intra‑observer concordance, and applying a washout of at least two weeks between glass and digital reads. Labs should investigate when concordance drops below 95%, and they must reconcile all discordances to protect patient safety (Evans et al., 2022).
Read more: Validation‑Ready AI for GxP Operations in Pharma
The real sources of error: artefacts and batch effects
The biggest threats are often mundane. Tissue folds, chatter, out‑of‑focus regions, pen marks, coverslip bubbles, scanner streaks, JPEG artefacts, stain variation, and background debris can all degrade interpretability. Reviews of WSI quality highlight how these effects accumulate along the pipeline—from grossing and staining to scanning and compression—and argue for computational QC embedded in routine flow, not just at validation time (Brixtel et al., 2022).
Practical tools exist. HistoQC is a well‑known open‑source QC application that locates artefacts, surfaces cohort outliers, and provides an interactive view for technicians and scientists (Janowczyk, 2019). Its authors report that, with QC run ahead of analysis, more than 95% of reviewed slides from a large dataset were suitable for computational analysis. Commercial offerings such as AiosynQC likewise target blur, pen, and tissue artefacts and position QC as a first gate before diagnostic AI (Aiosyn, 2024).
Validation, but also day‑to‑day assurance
Validation gets a lab to “go‑live.” Quality control keeps it there. CAP’s update stresses that validation should reflect intended use and environment. That means your cases, your stains, your scanners, and your pathologists.
It also means you monitor ongoing performance, re‑validate when workflows change, and keep a record that links evidence to decisions (College of American Pathologists, 2022).
A pragmatic operating model looks like this (a minimal gating sketch follows the list):

- At ingest, automated QC flags focus, pen, tissue coverage, and colour issues; technicians triage in minutes and rescan or recut when needed (Brixtel et al., 2022).
- Before AI analysis, a second check confirms that algorithm‑sensitive artefacts sit below defined thresholds. If not, the pipeline routes the slide to manual review or a rescan (Aiosyn, 2024).
- For clinical reporting, the lab tracks concordance trends and reacts when drift appears, exactly the kind of programme thinking CAP describes (Evans et al., 2022).
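To make the gating concrete, here is a minimal sketch of the routing step, assuming per‑slide artefact metrics (blur fraction, pen‑mark fraction, tissue coverage) have already been produced by an upstream QC tool. The metric names and thresholds are illustrative placeholders, not recommended values; each lab should derive its own from validation data.

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    RESCAN = "rescan"
    MANUAL_REVIEW = "manual_review"
    AI_ANALYSIS = "ai_analysis"


@dataclass
class SlideQC:
    slide_id: str
    blur_fraction: float     # fraction of tissue area judged out of focus
    pen_fraction: float      # fraction of the slide covered by pen marks
    tissue_coverage: float   # fraction of expected tissue area actually detected


# Illustrative thresholds only; real values come from the lab's own validation work.
THRESHOLDS = {"blur_fraction": 0.05, "pen_fraction": 0.02, "tissue_coverage": 0.60}


def route_slide(qc: SlideQC) -> Route:
    """Gate a slide before AI analysis or reporting based on ingest QC metrics."""
    if qc.tissue_coverage < THRESHOLDS["tissue_coverage"]:
        return Route.RESCAN           # likely a scanning or tissue-detection problem
    if qc.blur_fraction > THRESHOLDS["blur_fraction"]:
        return Route.RESCAN           # focus failures are usually fixed by rescanning
    if qc.pen_fraction > THRESHOLDS["pen_fraction"]:
        return Route.MANUAL_REVIEW    # pen marks need a human decision (clean, recut, or accept)
    return Route.AI_ANALYSIS


# Example: a sharp, clean slide with good coverage passes straight to analysis.
print(route_slide(SlideQC("S-001", blur_fraction=0.01, pen_fraction=0.0, tissue_coverage=0.85)))
```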
Read more: Edge Imaging for Reliable Cell and Gene Therapy
Explainability: the difference between a useful AI and a risky one
Artificial intelligence (AI) can triage fields of view, pre‑annotate regions, and support scoring. Yet a heatmap without context breeds doubt. What helps?
- Human‑readable cues: outline folds, highlight blur bands, mark pen regions; explanations that align with how pathologists think about image quality (Brixtel et al., 2022). A per‑tile blur map sketch appears at the end of this section.
- Cohort outlier panels: show when a stain deviates from historical ranges; HistoQC and similar tools make this visible (Janowczyk, 2019).
- Linked evidence: one click from a flag to the underlying metrics, scan settings, and QC thresholds. This supports reconciliation when a discordance appears, a point the CAP guideline underscores (Evans et al., 2022).

Explainability is not only for AI outputs. QC itself should explain why a slide failed a gate and how to fix it (rescan, restain, adjust the scanner focus map). That closes the loop and avoids unnecessary rework.
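As one illustration of a human‑readable cue, the sketch below builds a per‑tile blur map from a slide thumbnail, using OpenCV's variance‑of‑Laplacian as a simple sharpness proxy. The tile size and threshold are assumptions for illustration; a production check would mask background first and tune values per scanner and stain.

```python
import cv2          # OpenCV
import numpy as np


def blur_tile_map(thumbnail_bgr: np.ndarray, tile: int = 64, threshold: float = 50.0) -> np.ndarray:
    """Return a boolean grid marking thumbnail tiles with low variance-of-Laplacian.

    A True entry means the tile looks blurred under this simple proxy. Showing the
    grid as an overlay tells a technician where the suspect blur band sits, rather
    than reporting a single opaque slide-level score. Note that empty glass also has
    low sharpness, so background tiles should be masked out before trusting the map.
    """
    gray = cv2.cvtColor(thumbnail_bgr, cv2.COLOR_BGR2GRAY)
    rows, cols = gray.shape[0] // tile, gray.shape[1] // tile
    flags = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            patch = gray[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            sharpness = cv2.Laplacian(patch, cv2.CV_64F).var()
            flags[r, c] = sharpness < threshold
    return flags
```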
Data governance and clinical safety
Pathology data flows across lab systems, research drives, and cloud stores. A QC‑first posture reduces downstream waste, but labs also need traceability: who changed what, when, and why. Good practice is to bind QC results to each WSI (as JSON + PDF), store checksums, and capture scanner metadata and versions. CAP expects equivalence to light microscopy for intended use, plus a file of reconciled discordances—governance links those pieces so audits run smoothly (College of American Pathologists, 2022).
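As a concrete illustration of that binding, the sketch below writes a JSON sidecar next to the WSI containing a SHA‑256 checksum, the QC result, and scanner metadata. The field names and the `.qc.json` naming convention are assumptions for illustration, not an established schema.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-gigapixel WSIs never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()


def write_qc_sidecar(wsi_path: Path, qc_result: dict, scanner_meta: dict) -> Path:
    """Bind QC output, checksum, and scanner metadata to the slide as <slide>.qc.json."""
    record = {
        "slide": wsi_path.name,
        "sha256": sha256_of(wsi_path),
        "qc": qc_result,                    # metrics, pass/fail flags, thresholds used
        "scanner": scanner_meta,            # e.g. model, firmware, focus-map version
        "qc_pipeline_version": "0.1.0",     # illustrative version tag
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = wsi_path.parent / (wsi_path.name + ".qc.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

A human‑readable PDF summary can be generated from the same record, so the machine‑readable and human‑readable evidence never diverge.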
Read more: AI in Genetic Variant Interpretation: From Data to Meaning
Designing the QC stack: what to automate, what to keep manual
Based on the literature and guidelines, a balanced stack typically includes:
- Static checks at ingest (focus, pen, tissue, background). Tools like HistoQC offer these off the shelf and provide a fast, transparent UI for technologists (Janowczyk, 2019).
- Dynamic, cohort‑aware checks (stain statistics, colour deconvolution, scanner profile shifts) to catch batch effects that a single‑slide test misses (Brixtel et al., 2022). A minimal drift‑check sketch follows this list.
- Model‑compatibility checks for any downstream AI. Many vendors advise shielding models from known artefacts and rejecting slides when risk thresholds are exceeded (Aiosyn, 2024).
- Manual sign‑off for edge cases. CAP’s framework centres patient safety; when in doubt, a person decides and records the reason (Evans et al., 2022).
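As an example of a cohort‑aware check, the sketch below flags slides whose colour statistics drift more than a chosen number of standard deviations from a historical baseline. The summary statistics and the z‑score limit are illustrative assumptions; a production check might work on stain‑deconvolved channels instead of raw RGB.

```python
import numpy as np


def stain_stats(thumbnail_rgb: np.ndarray) -> np.ndarray:
    """Summarise a slide thumbnail as per-channel means and standard deviations (6 values)."""
    pixels = thumbnail_rgb.reshape(-1, 3).astype(np.float64)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])


def drift_flags(history: np.ndarray, current: np.ndarray, z_limit: float = 3.0) -> np.ndarray:
    """Flag slides in the incoming batch whose colour statistics fall outside the cohort range.

    `history` holds one row of stain_stats per previously accepted slide and `current`
    one row per new slide. A True entry means that slide deviates from the cohort
    baseline by more than `z_limit` standard deviations on at least one statistic,
    which should trigger review, a rescan batch, or scanner maintenance.
    """
    mu = history.mean(axis=0)
    sigma = history.std(axis=0) + 1e-9    # avoid division by zero on constant statistics
    z = np.abs((current - mu) / sigma)
    return (z > z_limit).any(axis=1)
```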
Read more: AI Visual Inspection for Sterile Injectables
Metrics that matter to pathologists and QA
Pick measures that clinicians feel and QA can audit:
- WSI–glass concordance (%) on periodic re‑reads of validation‑like sets; target ≥95% to match CAP expectations (Evans et al., 2022).
- QC fail rate by cause (focus, stain, pen, tissue coverage) and time to resolution. Reviews show that a slide‑level QC plan reduces turnaround when technicians can see and fix the cause immediately (Brixtel et al., 2022). A small reporting sketch follows this list.
- Cohort drift indicators (colour statistics, scanner profile shifts) with thresholds that trigger a rescan batch or maintenance (Brixtel et al., 2022).
- AI abstain rate and pathologist acceptance of AI suggestions on cases where QC passes; this helps calibrate trust and surfaces where explanations need work (Aiosyn, 2024).
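A minimal sketch of how the fail‑rate and time‑to‑resolution measures might be reported from a QC event log, assuming a pandas table with hypothetical columns `cause`, `flagged_at`, and `resolved_at`; the column names are illustrative, not a fixed schema.

```python
import pandas as pd


def qc_failure_summary(events: pd.DataFrame) -> pd.DataFrame:
    """Summarise QC failures by cause.

    Expects one row per failed slide with a `cause` label (focus, stain, pen,
    tissue_coverage) and `flagged_at` / `resolved_at` timestamps. Returns the
    failure count and median hours to resolution per cause, which maps directly
    to the fail-rate and time-to-resolution metrics above.
    """
    events = events.copy()
    events["hours_to_resolution"] = (
        events["resolved_at"] - events["flagged_at"]
    ).dt.total_seconds() / 3600.0
    return events.groupby("cause").agg(
        failures=("cause", "size"),
        median_hours_to_resolution=("hours_to_resolution", "median"),
    )
```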
A step‑by‑step adoption plan
Start with a pilot on one specimen class (e.g., H&E surgical resections) and one scanner line. Build a validation set of at least 60 cases and measure intra‑observer concordance with a washout of at least two weeks, as CAP advises (Evans et al., 2022). Introduce automated QC at ingest and measure rescans avoided, turnaround, and pathologist satisfaction (Brixtel et al., 2022).
Add model‑compatibility checks for any diagnostic AI, and compare pathologist acceptance before and after explainability improvements (Aiosyn, 2024). Codify governance: store QC artefacts with each WSI; keep a living log of discordances and resolutions to match CAP’s reconciliation intent (College of American Pathologists, 2022). Scale by stain and organ system and re‑validate when scanners, stains, or workflows change—an expectation baked into the CAP update (Evans et al., 2022).
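A minimal sketch of the concordance tracking described above, assuming paired glass and digital diagnoses recorded per case after the washout period. The ≥95% expectation and the need to reconcile discordances come from the CAP guideline; the data structure itself is an illustrative assumption.

```python
def intraobserver_concordance(paired_reads: list[tuple[str, str, str]]) -> tuple[float, list[str]]:
    """Compute glass-versus-digital concordance for reads by the same pathologist.

    `paired_reads` holds (case_id, glass_diagnosis, digital_diagnosis) tuples
    collected after a washout of at least two weeks. Returns the concordance
    percentage and the case IDs that still need reconciliation.
    """
    discordant = [case_id for case_id, glass, digital in paired_reads if glass != digital]
    concordance = 100.0 * (len(paired_reads) - len(discordant)) / len(paired_reads)
    return concordance, discordant


# Example: 2 discordances in a 60-case set gives 96.7%, above the 95% expectation,
# but both cases still go into the reconciliation log.
```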
Read more: Predicting Clinical Trial Risks with AI in Real Time
What this means for patients and the service
Patients see faster, more consistent reports because fewer slides bounce back for rescans late in the process. Pathologists spend time on diagnosis rather than chasing artefacts. Lab managers see fewer surprises when scanners drift or when a batch deviates.
Data scientists get cleaner inputs for AI studies. Most importantly, the service grows its ability to prove that digital reads are safe and reliable—on your cases, in your lab, under your governance—exactly as the guideline intends (College of American Pathologists, 2022).
How TechnoLynx can help
TechnoLynx delivers explainable, validation‑ready QC pipelines for WSI. We integrate open‑source tools with cohort‑aware checks and model‑compatibility gates, then present results in a clear, clinical UI so technicians and pathologists act fast.
We set up CAP‑aligned validation (≥60 cases, intra‑observer concordance, washout), bind QC artefacts to each WSI, and produce audit‑ready packs for QA. Our approach keeps pathologists in control, surfaces fixes at source, and prepares labs to adopt diagnostic AI without losing trust.
Read more: Generative AI in Pharma: Compliance and Innovation
References
- Aiosyn (2024) Automated quality control for digital pathology slides. Available at: https://www.aiosyn.com/automated-quality-control/ (Accessed: 19 September 2025).
- Brixtel, R. et al. (2022) ‘Whole slide image quality in digital pathology: review and perspectives’, IEEE Access. Available at: https://datexim.ai/wp-content/uploads/2023/03/whole_slide_image_quality_in_digital_pathology_review_and_perspectives.pdf (Accessed: 19 September 2025).
- CAP TODAY (2021) CAP releases a new evidence‑based guideline. Available at: https://www.captodayonline.com/cap-releases-a-new-evidence-based-guideline/ (Accessed: 19 September 2025).
- College of American Pathologists (2022) Validating Whole Slide Imaging for Diagnostic Purposes in Pathology (guideline update). Available at: https://www.cap.org/protocols-and-guidelines/cap-guidelines/current-cap-guidelines/validating-whole-slide-imaging-for-diagnostic-purposes-in-pathology (Accessed: 19 September 2025).
- Evans, A.J. et al. (2022) ‘Validating whole slide imaging systems for diagnostic purposes in pathology: guideline update’, Archives of Pathology & Laboratory Medicine, 146(4), pp. 440–450. Available at: https://meridian.allenpress.com/aplm/article/146/4/440/464968/Validating-Whole-Slide-Imaging-Systems-for (Accessed: 19 September 2025).
- Janowczyk, A. (2019) ‘HistoQC: an open‑source quality control tool for digital pathology slides’, JCO Clinical Cancer Informatics. Available at: https://ascopubs.org/doi/pdf/10.1200/CCI.18.00157 (Accessed: 19 September 2025).

Image credits: Freepik