Why does the CCTV system choice affect AI analytics quality?

AI video analytics systems process video feeds to detect events — intrusions, abandoned objects, crowd density changes, behaviour anomalies. The quality of the analytics depends as much on the camera system’s characteristics as on the AI model’s capability. Camera resolution, codec, frame rate, lens quality, and network architecture all affect what the AI model receives as input.

The most common mistake: selecting cameras based on maximum resolution (4K, 8K) without considering the full pipeline. A 4K camera streaming H.265 at 15 FPS over a congested network produces lower analytics quality than a 1080p camera streaming H.264 at 30 FPS over a dedicated network — because the AI model needs consistent frame delivery more than it needs maximum pixel count.

What specifications matter for AI-ready CCTV?

| Specification | Why It Matters for AI | Minimum for Analytics | Recommended |
|---|---|---|---|
| Resolution | Object detection accuracy | 1080p (1920×1080) | 2K–4K |
| Frame rate | Motion detection, tracking | 15 FPS | 25–30 FPS |
| Codec | Processing overhead, storage | H.264 | H.265 with fallback |
| WDR (Wide Dynamic Range) | Handles mixed lighting | 100 dB | 120+ dB |
| IR illumination | Night operation | 30 m range | 50 m+ range |
| ONVIF compliance | Integration with analytics | Profile S | Profile S + T |
| Edge compute | On-camera analytics | Not required | NVIDIA Jetson or equivalent |

ONVIF Profile S compliance is critical for integration with third-party AI analytics platforms. Cameras that use proprietary streaming protocols require custom integration work for each camera vendor — a cost multiplier that makes the system difficult to maintain and upgrade.

How should the network architecture support AI analytics?

Wired CCTV systems use either analogue (coax) or IP (Ethernet) infrastructure. For AI video analytics, IP is required — analogue systems must be digitised before AI processing, adding latency and cost. The network architecture for AI-ready CCTV:

Dedicated VLAN: Video traffic should be isolated on a dedicated network segment to prevent bandwidth contention with other traffic. A 4K camera at 30 FPS generates 8–25 Mbps depending on codec and scene complexity. Twenty cameras generate 160–500 Mbps sustained — enough to saturate a shared network segment (see the bandwidth sketch at the end of this section).

PoE+ (Power over Ethernet Plus): Provides power to cameras over the network cable, eliminating separate power runs. PoE+ delivers up to 30 W per port — sufficient for most IP cameras, including those with IR illumination and heated housings.

Edge processing vs centralised: In our deployments, we favour edge processing (analytics on or near the camera) for latency-sensitive applications (real-time alerts) and centralised processing (analytics on a GPU server) for throughput-sensitive applications (post-event search, behaviour analysis across multiple cameras).

For details on reducing false positives in AI surveillance systems, our analysis of surveillance false alarm patterns covers the detection tuning methodology.

What should you prioritise when selecting a system?

For new AI analytics deployments, we recommend prioritising: (1) ONVIF compliance and standard codec support over proprietary features, (2) WDR capability over maximum resolution (most analytics failures come from lighting, not pixel count), (3) 25+ FPS sustained frame rate over burst frame rate (a quick way to verify delivered frame rate is sketched below), and (4) network architecture that can sustain the aggregate bandwidth of all cameras simultaneously, with 30% headroom for future expansion.
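Point (3) can be checked empirically before committing to a camera model. Here is a minimal sketch using OpenCV (the opencv-python package): it counts the frames a stream actually delivers over a sample window, which is what the analytics model sees, rather than trusting the rate the camera advertises. The RTSP URL and credentials are placeholders, not values from any real deployment.

```python
# Sketch: measure the frame rate a camera actually delivers over RTSP,
# as opposed to the rate it advertises. Requires opencv-python; the URL
# and credentials below are hypothetical placeholders.
import time

import cv2

RTSP_URL = "rtsp://user:pass@192.168.10.21/stream1"  # hypothetical camera
SAMPLE_SECONDS = 10

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise SystemExit("could not open stream")

advertised_fps = cap.get(cv2.CAP_PROP_FPS)  # what the camera claims
frames = 0
start = time.monotonic()
while time.monotonic() - start < SAMPLE_SECONDS:
    ok, _frame = cap.read()   # blocks until the next frame arrives
    if not ok:
        break                 # dropped connection or decode error
    frames += 1
elapsed = time.monotonic() - start
cap.release()

print(f"Delivered: {frames / elapsed:.1f} FPS "
      f"(advertised: {advertised_fps:.0f} FPS)")
```

Run this against the camera on the actual network segment it will use: a large gap between delivered and advertised FPS usually points to network congestion rather than a camera fault.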
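And for point (4), a minimal sketch of the aggregate-bandwidth check. The 4K bitrates are the figures quoted earlier in this section (8–25 Mbps depending on codec); the 1080p figures and the camera mix are illustrative assumptions.

```python
# Sketch: verify a dedicated video VLAN can sustain the aggregate camera
# bandwidth plus the 30% expansion headroom recommended above. The bitrate
# table is an assumption based on the figures in this article, not
# vendor-measured values.

BITRATE_MBPS = {            # sustained Mbps per camera (scene-dependent)
    ("1080p", "h264"): 4.0,
    ("1080p", "h265"): 2.5,
    ("4k", "h264"): 25.0,
    ("4k", "h265"): 8.0,
}

HEADROOM = 0.30             # 30% headroom for future expansion


def required_capacity_mbps(cameras: list[tuple[str, str]]) -> float:
    """Aggregate sustained bandwidth for a camera list, plus headroom."""
    aggregate = sum(BITRATE_MBPS[(res, codec)] for res, codec in cameras)
    return aggregate * (1 + HEADROOM)


if __name__ == "__main__":
    # Hypothetical deployment: twenty 4K cameras on H.265.
    fleet = [("4k", "h265")] * 20
    print(f"Required VLAN capacity: {required_capacity_mbps(fleet):.0f} Mbps")
    # -> 208 Mbps. The same fleet on H.264 needs 20 x 25 = 500 Mbps
    # sustained (650 Mbps with headroom), which crowds a 1 Gbps segment.
```

How do you future-proof a CCTV installation for AI analytics?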
The cameras installed today will likely serve for 5–10 years. The AI analytics capabilities available in 5 years will exceed today’s capabilities significantly. Future-proofing the camera infrastructure means installing hardware that can support analytics capabilities that do not yet exist.

The most future-proof choices: (1) install cameras with higher resolution than currently needed (2K minimum, 4K preferred) — future analytics may extract value from resolution that current models cannot fully utilise; (2) ensure all cameras support H.265 and have sufficient processing power for future codec standards; (3) install network infrastructure with 2–3× the bandwidth required by the current camera count — cable runs are expensive to add later; (4) choose cameras with edge compute capability or expansion options, even if edge analytics are not planned initially.

Cable infrastructure is the most expensive component to upgrade after installation. Running Cat6A or fibre during initial installation costs marginally more than Cat6 but supports 10 Gbps per run — sufficient for future 8K cameras and edge compute that may require high-bandwidth backhaul. We have seen installations where the cost of re-cabling exceeded the cost of the original camera system.

Storage planning for AI analytics differs from traditional CCTV storage planning. Traditional CCTV retains full video for 14–30 days and then deletes it. AI analytics systems may need to retain: (1) full video for compliance (14–30 days), (2) detection events with associated video clips indefinitely, and (3) training data (annotated frames) permanently. The storage architecture should separate these retention tiers — hot storage for recent full video, warm storage for event clips, cold storage for training data. This tiered approach reduces storage costs by 40–60% compared to retaining full video for the longest retention requirement.

Our system designs include a storage budget calculator that projects storage requirements based on camera count, resolution, retention policy, and expected detection event frequency (a simplified version is sketched below). This projection prevents the common problem of running out of storage 6 months into a deployment and being forced to reduce retention or add emergency storage at premium cost.
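To show the shape of that projection, here is a minimal sketch of a tiered storage calculator. It is not the calculator from our system designs: the H.265 bitrate, event frequency, clip length, and frame-size figures are illustrative assumptions, and a real projection would use bitrates measured per scene.

```python
# Sketch: project tiered storage needs (hot / warm / cold) for an AI
# analytics deployment. All rates below are illustrative assumptions.

GB_PER_HOUR_1080P_H265 = 1.1   # ~2.5 Mbps sustained (assumption)


def projected_storage_tb(
    cameras: int,
    retention_days: int = 30,           # hot: full video for compliance
    events_per_camera_day: float = 20,  # warm: detection event clips
    clip_seconds: int = 30,
    clip_retention_days: int = 365,
    training_frames_per_day: int = 50,  # cold: annotated training frames
    frame_mb: float = 0.5,
) -> dict[str, float]:
    """Return hot/warm/cold storage needs in TB for one year of operation."""
    hot = cameras * 24 * GB_PER_HOUR_1080P_H265 * retention_days
    clip_gb = GB_PER_HOUR_1080P_H265 * clip_seconds / 3600
    warm = cameras * events_per_camera_day * clip_gb * clip_retention_days
    cold = cameras * training_frames_per_day * frame_mb / 1024 * clip_retention_days
    return {tier: gb / 1024 for tier, gb in
            {"hot": hot, "warm": warm, "cold": cold}.items()}


print(projected_storage_tb(cameras=20))
# -> roughly {'hot': 15.5, 'warm': 1.3, 'cold': 0.2} TB. Keeping full video
# for the whole year instead would need ~188 TB for the same fleet, which
# is the tiering saving described above.
```

Running the projection at design time, before hardware is ordered, is what prevents the mid-deployment storage crunch: the retention policy and event-frequency assumptions are explicit inputs that can be revisited as real detection rates come in.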