# The accuracy gap between lab and CCTV

Facial recognition vendors report accuracy rates of 99%+ on benchmark datasets (LFW, MegaFace, NIST FRVT). These benchmarks use cooperative subjects, controlled lighting, frontal-facing poses, and high-resolution images. Production CCTV environments provide none of these conditions.

Facial recognition accuracy drops 10–40% between controlled enrollment conditions and production CCTV — angle, lighting, and resolution are the primary degradation factors. This isn't a model quality issue. It's a physics and deployment issue. The same algorithm that achieves 99.7% on NIST FRVT may achieve 65–80% in a real CCTV corridor with overhead angles, mixed lighting, and 720p resolution at 15 metres.

## The three degradation factors

| Factor | Lab condition | CCTV reality | Impact on accuracy |
|---|---|---|---|
| Angle | Frontal (±15°) | 30–60° overhead, oblique | 15–25% accuracy reduction at >30° off-axis |
| Lighting | Uniform, consistent | Variable (natural + artificial, shadows, backlight) | 10–20% reduction under mixed/backlit conditions |
| Resolution | 100+ pixels between eyes | 20–40 pixels between eyes at typical camera distances | Below 40 inter-pupillary pixels, recognition becomes unreliable |

These factors compound. A subject at a 30° angle, under mixed lighting, at 25 inter-pupillary pixels may produce a match confidence below any operationally useful threshold — even when the same subject at enrollment produced a near-perfect template.

## What makes facial recognition work in production

Deployments that maintain useful accuracy in real conditions share characteristics: controlled enrollment (high-quality frontal images), camera positions chosen for facial capture (not general surveillance), illumination specifically designed for face imaging (IR illuminators), and narrow operating distances (access gates, not open corridors).
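The resolution factor is straightforward geometry. As a rough sketch — assuming a pinhole camera model and an average adult inter-pupillary distance of about 63 mm, neither of which comes from the text above — you can estimate how many pixels land between a subject's eyes at a given distance:

```python
import math

# Assumptions (not from the article): pinhole camera model, average adult
# inter-pupillary distance of ~63 mm, subject roughly centred in frame.
IPD_METRES = 0.063

def interpupillary_pixels(h_res_px: int, h_fov_deg: float, distance_m: float) -> float:
    """Estimate pixels between the eyes for a face at distance_m from the camera."""
    # Width of the scene covered by the sensor at that distance.
    scene_width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg) / 2)
    pixels_per_metre = h_res_px / scene_width_m
    return IPD_METRES * pixels_per_metre

# A 1080p camera with a 30-degree lens at 10 m already sits near the lower
# edge of the 20-40 px band described above.
print(round(interpupillary_pixels(1920, 30, 10), 1))  # → 22.6
```

Running numbers like these against your own camera specs and mounting distances shows quickly whether a corridor can ever deliver the inter-pupillary resolution the recognition stage needs.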
Observable CV pipeline architecture allows facial recognition to function as one signal in a multi-stage pipeline — where a face match contributes confidence alongside other identifiers (gait, clothing, badge) rather than serving as the sole identification mechanism. This architectural approach tolerates individual-stage inaccuracy because no single stage bears the full decision weight.

## The operational implication

Teams considering facial recognition for CCTV applications should test with their actual camera infrastructure, at actual operating distances, under actual lighting conditions — before making deployment commitments. Vendor demonstrations using cooperative subjects at 2-metre distance under ring lighting tell you nothing about the system's performance on your 15-metre corridor cameras at ceiling height.
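The multi-signal idea can be sketched as a simple weighted fusion of per-stage confidences. Everything here — the signal names, weights, and the 0.7 threshold — is an illustrative assumption, not Observable's actual API:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    confidence: float  # 0.0-1.0, from the stage's own matcher
    weight: float      # how much this stage contributes to the decision

def fused_confidence(signals: list[Signal]) -> float:
    """Weighted average of per-stage confidences: no single stage decides alone."""
    total_weight = sum(s.weight for s in signals)
    return sum(s.confidence * s.weight for s in signals) / total_weight

# A weak face match (0.55, e.g. at 25 inter-pupillary pixels) combined with
# strong gait and badge signals can still clear a 0.7 decision threshold,
# while the face match alone would fail it.
signals = [
    Signal("face", confidence=0.55, weight=0.4),
    Signal("gait", confidence=0.80, weight=0.3),
    Signal("badge", confidence=0.95, weight=0.3),
]
print(fused_confidence(signals))  # → 0.745
```

The design point is that degrading one stage (the face match) shifts the fused score gradually instead of flipping the decision outright, which is what lets the pipeline tolerate the accuracy losses described above.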