The model did not get worse — the data changed
A computer vision system that performed reliably for three months starts producing more false positives. The engineering team’s first response: check the model. Is it corrupted? Did an update go wrong? Was there a configuration change? Usually, the model is identical to the one that was performing well. What changed is the data — the images arriving at the model’s input are no longer drawn from the same distribution that the model was trained and validated on.
This pattern — stable model, shifting data, degrading performance — is the dominant failure mode for production computer vision systems. It is also the most under-monitored, because most CV deployment teams invest heavily in model evaluation at deployment time and minimally in data monitoring after deployment. The model is treated as the intelligent component that might fail; the data is treated as a passive input that is assumed to be stable. That assumption is almost always wrong.
Data quality issues account for more production model failures than algorithmic limitations do. Sambasivan et al. (2021, ‘Everyone wants to do the model work, not the data work’), a Google Research study, documented that data cascades — compounding downstream effects of data quality issues — affected 92% of the AI practitioners they surveyed.
Why does annotation inconsistency set an invisible ceiling?
The quality ceiling of any supervised computer vision model is set by the quality of its training labels. If two annotators examine the same image and disagree on whether it contains a defect — or on the defect boundary, or on the defect classification — the model learns that disagreement. The result is a model whose behaviour in ambiguous cases reflects the noise in the labelling process rather than a coherent decision criterion.
Inter-annotator agreement is measurable (Cohen’s kappa, Fleiss’ kappa for multiple annotators) but rarely measured in practice. We have reviewed annotation pipelines where three annotators produced agreement rates below 70% on boundary cases — meaning the model was being trained on data where the “ground truth” was effectively a coin flip for nearly a third of difficult examples. The model’s reported accuracy on a held-out set reflected this noise: high accuracy on easy cases, near-random performance on boundary cases, and an overall metric that looked acceptable but masked a systematic weakness.
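Measuring this is cheap. Cohen's kappa corrects raw agreement for the agreement two annotators would reach by chance alone; a minimal pure-Python sketch (the defect labels below are hypothetical — in a production pipeline a library implementation such as `sklearn.metrics.cohen_kappa_score` does the same job):

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where the annotators agree.
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement, from each annotator's label marginals.
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

kappa = cohen_kappa(
    ["defect", "ok", "ok", "defect", "ok", "ok"],
    ["defect", "ok", "defect", "defect", "ok", "ok"],
)
print(round(kappa, 3))  # → 0.667
```

Kappa of 1.0 means perfect agreement; values below roughly 0.6–0.7 on boundary cases are the invisible ceiling described above, regardless of how the headline agreement percentage looks.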
The fix is not more annotations — it is better annotation protocols. Explicit criteria for boundary cases (at what size does a scratch become a defect? what level of discolouration counts as contamination? where exactly is the boundary of an anomalous region?), calibration exercises where annotators align on edge cases before production labelling begins, and ongoing agreement monitoring that flags drift in annotator behaviour over time. These are data engineering tasks, not ML engineering tasks — and they determine the model’s performance ceiling more than any architectural choice.
Domain shift: training conditions ≠ production conditions
Domain shift occurs when the production environment differs systematically from the training environment. The model learned features optimised for the training distribution — specific lighting conditions, camera angles, background characteristics, product appearances — and those features transfer imperfectly to a distribution that differs along any of these dimensions.
The sources of domain shift in production computer vision are predictable:
Camera and optics changes. A lens replacement, a camera firmware update, a cleaning schedule change, or physical repositioning of the camera system changes the image characteristics in ways that may be invisible to human inspection but measurable in the image statistics that the model relies on. A ResNet trained on images with one lens distortion profile will produce different feature activations when the lens is replaced, even if the human-visible content is identical.
Lighting degradation. Industrial lighting degrades over time — bulb output decreases, colour temperature shifts, and reflector efficiency drops. The degradation is gradual enough that human operators may not notice it, but the statistical properties of the images change measurably. A model calibrated under fresh lighting will experience a slow accuracy drift as the lighting ages, and the drift may not cross an alert threshold until it has accumulated enough to affect production outcomes.
Product evolution. In retail and manufacturing environments, the products being inspected change over time — new packaging designs, new product variants, seasonal product mixes. Each change introduces visual characteristics that the model may not have seen during training. The off-the-shelf model failure patterns are particularly acute here: a model trained on last quarter’s product mix may fail on this quarter’s new variant.
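One inexpensive way to catch shifts of this kind is to summarise a simple image statistic over the training set and flag production batches that deviate from that baseline. The sketch below uses per-image mean pixel intensity and a z-score threshold — both hypothetical choices; channel means, contrast, or feature activations slot in the same way, and a production monitor would account for batch size and track several statistics at once:

```python
import statistics

def brightness_baseline(training_means):
    """Mean and spread of per-image mean intensities on the training set."""
    return statistics.mean(training_means), statistics.stdev(training_means)

def batch_drifted(production_means, baseline, z_threshold=3.0):
    """True if a production batch's average brightness sits more than
    z_threshold baseline standard deviations from the training mean."""
    mu, sigma = baseline
    z = abs(statistics.mean(production_means) - mu) / sigma
    return z > z_threshold

baseline = brightness_baseline([100, 102, 98, 101, 99])
print(batch_drifted([100.5, 99.8, 100.2], baseline))  # → False: within tolerance
print(batch_drifted([110.0, 111.0, 109.0], baseline))  # → True: lighting has shifted
```

The second batch would look unremarkable to a human operator, which is exactly the point: the statistic moves before the failure rate does.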
Data drift: the slow degradation
Data drift is the gradual change in the production data distribution over time, without a single identifiable cause. It is the accumulation of small environmental changes — lighting aging, camera positioning micro-shifts, seasonal variations, process parameter changes in manufacturing — that collectively shift the production data away from the training distribution.
The challenge with data drift is that no single change triggers an alert. Each individual shift is within tolerance. The cumulative effect crosses a threshold only after weeks or months of gradual degradation — at which point the model’s production performance may have declined significantly without any single monitoring signal indicating when the decline began.
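A toy numerical illustration of why per-step alerting misses this (the shift sizes and thresholds are invented for the example):

```python
# Each day's shift stays well under a hypothetical 1% per-step alert
# threshold, yet the accumulated shift crosses a 9.5% tolerance in weeks.
daily_shift = 0.004            # 0.4% distribution change per day
per_step_threshold = 0.01      # per-step alerting never fires
cumulative_tolerance = 0.095

cumulative, alert_day = 0.0, None
for day in range(1, 91):
    assert daily_shift < per_step_threshold   # no single day looks alarming
    cumulative += daily_shift
    if alert_day is None and cumulative > cumulative_tolerance:
        alert_day = day

print(alert_day)  # → 24: over three weeks of silent degradation
```

Only monitoring against a fixed training-time reference, rather than against yesterday's data, catches the accumulation.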
Detecting data drift requires statistical monitoring of the production data distribution: tracking the statistical properties of the model’s input data (pixel intensity distributions, feature activation distributions, preprocessing output statistics) against reference baselines from the training data. Our recommendation is to implement drift detection at the pipeline’s preprocessing stage where distribution shifts are most measurable, using statistical tests (KL divergence, Population Stability Index, or simpler distributional comparisons) that flag when the production distribution has moved beyond a documented tolerance from the training reference.
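A minimal Population Stability Index implementation over pre-binned histograms (pure Python for illustration; in practice the bins would come from the pixel-intensity or feature-activation histograms described above):

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index between a training-reference histogram
    (expected) and a production histogram (actual), bin by bin."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # Clamp empty bins so the log term stays finite.
        e_pct = max(e / e_total, eps)
        a_pct = max(a / a_total, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

print(psi([50, 30, 20], [50, 30, 20]))  # → 0.0: identical distributions
print(round(psi([50, 30, 20], [20, 30, 50]), 2))  # → 0.55: significant shift
```

A commonly used rule of thumb reads PSI below 0.1 as stable, 0.1–0.25 as moderate shift, and above 0.25 as significant shift — but the actual tolerance should be documented against your own training reference rather than borrowed.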
The feedback loop that most teams skip
The standard CV deployment lifecycle is: collect data → label data → train model → evaluate → deploy → monitor accuracy. What is usually missing is the feedback loop: route production failures back to the training pipeline as new training data.
Production failures — false positives reviewed and corrected by human operators, false negatives discovered through downstream quality checks, edge cases flagged for review — are the most valuable training data the system produces. They represent exactly the cases where the model is weakest, in the exact conditions where the model operates. Incorporating these cases into the training pipeline (with appropriate annotation quality controls) produces a model that improves specifically in the areas where it is failing.
This feedback loop requires infrastructure: a mechanism to capture production failures, a pipeline to label them with quality-controlled annotations, and a retraining schedule that incorporates the new data without losing performance on cases the model already handles well. The infrastructure cost is non-trivial. The alternative — retraining on the original dataset whenever performance degrades — is a pattern that produces a model that is perpetually optimised for the past rather than adapted to the present.
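The capture-and-review mechanism can be sketched as a small data structure. The class and field names below are illustrative, not a prescribed schema; the one design point worth copying is the gate — unreviewed captures never reach the training set:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FailureCase:
    image_id: str
    model_prediction: str
    corrected_label: Optional[str] = None   # filled in by human review
    reviewed: bool = False

@dataclass
class FeedbackQueue:
    """Capture production failures, gate them through human review, and
    emit only quality-controlled examples for retraining."""
    cases: List[FailureCase] = field(default_factory=list)

    def capture(self, image_id: str, prediction: str) -> None:
        self.cases.append(FailureCase(image_id, prediction))

    def review(self, image_id: str, corrected_label: str) -> None:
        for case in self.cases:
            if case.image_id == image_id:
                case.corrected_label = corrected_label
                case.reviewed = True

    def retraining_batch(self) -> List[Tuple[str, str]]:
        # Unreviewed captures never leak into the training set.
        return [(c.image_id, c.corrected_label)
                for c in self.cases if c.reviewed]

queue = FeedbackQueue()
queue.capture("img-001", "defect")   # false positive flagged by an operator
queue.capture("img-002", "ok")       # edge case awaiting review
queue.review("img-001", "ok")
print(queue.retraining_batch())      # → [('img-001', 'ok')]
```

At retraining time, a batch like this would be mixed with the original training data rather than replacing it, so the model improves on its failure cases without regressing on the cases it already handles well.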
Building data quality into the deployment, not after it
Data quality is not a pre-deployment task that can be checked off and forgotten. It is an ongoing operational concern that requires monitoring infrastructure, annotation quality processes, and feedback loops that persist for the lifetime of the production system.
The data readiness assessment before deployment establishes the baseline: is the training data representative of the production environment? Is the annotation quality sufficient? Does the class distribution reflect production conditions? The monitoring infrastructure after deployment tracks drift from that baseline. The feedback loop continuously improves the baseline as the production environment evolves.
If your computer vision system is experiencing accuracy degradation after deployment and the root cause investigation has focused on the model rather than the data, a Production CV Readiness Assessment includes data quality diagnostics — annotation consistency analysis, distribution shift measurement, and feedback loop design — as core components. Our computer vision practice treats data quality as the primary determinant of production reliability.