CASE STUDY

Share-of-Shelf Analytics

For a multinational startup operating in North American grocery retail, we built a shelf analytics system that measures share-of-shelf in two modes, area-based and count-based, segmented per shelf rather than per rack. Unknown products are surfaced as a controlled output instead of hidden in a catch-all bucket.


The Challenge

Share-of-shelf measurement sounds like a reporting problem. In practice it is a recognition problem: you can only measure shelf share for products you can identify, and a real retail shelf always contains products you cannot identify: competitor SKUs, new listings, store-brand variants, and regional items that were never in the training set. A system that ignores unknowns produces systematically inflated share numbers. A system that counts them honestly needs a way to deal with them operationally.

No universal representation of a shelf.

A gondola rack has multiple shelves. Aggregating share across the whole rack produces numbers that hide per-shelf performance differences: a brand can have strong overall share while being absent from the eye-level shelves that drive most purchases. The system needed per-shelf segmentation, not just rack totals.

Unknown products are structurally unavoidable.

Competitor products and unregistered SKUs cannot be classified without labelling. Treating them as noise produces inflated share numbers. Treating them as a separate class without labelling produces meaningless counts. The design had to acknowledge unknowns as a first-class output, not an edge case to suppress.

Raw counts trigger precision disputes.

Reporting "21 facings" versus "20 facings" creates unproductive arguments about measurement precision: is the difference a model error or a real shelf change? Share-of-shelf as a ratio is more decision-useful: a 3% share movement is actionable regardless of whether the absolute count was 21 or 20.

[Image: stocked retail shelves with multiple product brands occupying overlapping shelf space]

Project Timeline

From shelf segmentation to a self-improving measurement system that handles unknown products honestly

Shelf Detection & Segmentation

Built the shelf detection layer to separate individual shelf bands within a rack image, a prerequisite for per-shelf share computation rather than rack-level aggregation.

Implemented product detection and segmentation within the shelf product space. Defined the measurement boundary: what constitutes the shelf area to be measured versus fixtures, labels, and whitespace.

Product Space Detection

Dual-Mode SoS Measurement

Implemented both area-based SoS (pixel area owned by each brand relative to total product space) and count-based SoS (facing counts per brand). Both modes output per-shelf figures, not just rack totals.

Unknown products β€” anything not in the training catalogue β€” are explicitly surfaced as a percentage of the total product space rather than silently excluded. This gives operators a live signal of catalogue coverage and triggers the labelling workflow for new items.
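The dual-mode computation described above can be sketched as follows. This is a minimal illustration, not the production pipeline: it assumes detections arrive as (brand, pixel-area) pairs for a single shelf, with unrecognised products already flagged as "unknown", and that the shelf's total product-space area is known from the segmentation step. All names are illustrative.

```python
from collections import defaultdict

def shelf_sos(detections, shelf_area_px):
    """Compute dual-mode share-of-shelf for one shelf.

    detections: list of (brand, pixel_area) pairs; brand is "unknown"
    for products not in the catalogue. shelf_area_px is the total
    product-space pixel area of the shelf.
    """
    area = defaultdict(float)
    count = defaultdict(int)
    for brand, px in detections:
        area[brand] += px
        count[brand] += 1
    total_facings = sum(count.values())
    return {
        # Area-based SoS: brand pixel area / total product-space pixel area
        "area_sos": {b: a / shelf_area_px for b, a in area.items()},
        # Count-based SoS: brand facings / total facings
        "count_sos": {b: c / total_facings for b, c in count.items()},
        # Unknown product space surfaced explicitly, never excluded
        "unknown_pct": 100.0 * area.get("unknown", 0.0) / shelf_area_px,
    }

dets = [("brand_a", 3000), ("brand_a", 2800), ("brand_b", 2400), ("unknown", 1800)]
report = shelf_sos(dets, shelf_area_px=12000)
```

Note that the unknown share appears in both the per-brand dictionaries and as its own top-level figure, so a drop in catalogue coverage is visible even to a consumer that only reads the summary field.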

Unknown-Product Handling

Supervisor Label → Retrain Loop

Surfaced unknowns feed directly into a labelling queue. A supervisor labels the unknown products; the labelled examples are added to the training set and the model is retrained. The design intent: each labelling cycle feeds back into retraining, progressively reducing the unknown percentage and expanding catalogue coverage, turning the system's own measurement gaps into a controlled dataset-growth mechanism.
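The queue-and-drain mechanics of that loop can be sketched in a few lines. This is a simplified, in-memory stand-in for what would be a persistent queue and a training-job scheduler in practice; the threshold value, function names, and crop representation are all illustrative assumptions, not the deployed interface.

```python
UNKNOWN_THRESHOLD_PCT = 10.0  # operator-defined; value here is illustrative

labelling_queue = []

def process_shelf(shelf_id, unknown_pct, unknown_crops):
    """Queue unknown-product crops for supervisor labelling when the
    shelf's unknown share exceeds the operator-defined threshold."""
    if unknown_pct > UNKNOWN_THRESHOLD_PCT:
        for crop in unknown_crops:
            labelling_queue.append({"shelf": shelf_id, "crop": crop})
        return True  # a retrain cycle will consume these once labelled
    return False

def drain_labelled(labels):
    """After a supervisor assigns labels, move the queued examples into
    the training set; a retrain job then consumes the updated set."""
    training_examples = []
    while labelling_queue and labels:
        item = labelling_queue.pop(0)
        item["label"] = labels.pop(0)
        training_examples.append(item)
    return training_examples

queued = process_shelf("shelf_3", unknown_pct=12.0,
                       unknown_crops=["crop_a", "crop_b"])
examples = drain_labelled(["brand_x", "brand_y"])
```

The important property is that nothing enters the training set without a supervisor label, and nothing leaves the unknown bucket without entering the queue: the measurement gap and the dataset growth are the same object seen from two sides.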

The Solution

The core design insight: measuring share-of-shelf accurately requires acknowledging what you cannot measure. We built a system that is honest about its recognition boundaries, and uses that honesty as the engine for expanding them.

Per-Shelf Segmentation

Rack-level share figures hide the placement performance that drives purchasing decisions. A brand can hold strong overall share while being absent from the eye-level shelves that actually convert. Shelf-band detection separates individual shelves within a rack before any share computation; this is a prerequisite for per-shelf analytics, not a refinement.
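One simple way to realise that ordering is to assign each detected product to the shelf band that contains most of its vertical extent before any share is computed. The sketch below assumes the shelf-detection layer emits horizontal bands as (y_top, y_bottom) ranges and products as axis-aligned boxes; the actual system's representations may differ.

```python
def assign_to_shelves(product_boxes, shelf_bands):
    """Assign each product bounding box to the shelf band covering the
    largest portion of its vertical extent.

    product_boxes: list of (x1, y1, x2, y2) boxes in image coordinates.
    shelf_bands: list of (y_top, y_bottom) ranges, one per shelf.
    """
    per_shelf = {i: [] for i in range(len(shelf_bands))}
    for box in product_boxes:
        _, y1, _, y2 = box
        best, best_overlap = None, 0.0
        for i, (top, bottom) in enumerate(shelf_bands):
            # Vertical overlap between the box and this shelf band
            overlap = max(0.0, min(y2, bottom) - max(y1, top))
            if overlap > best_overlap:
                best, best_overlap = i, overlap
        if best is not None:
            per_shelf[best].append(box)
    return per_shelf

bands = [(0, 100), (100, 200)]
boxes = [(10, 20, 50, 80), (10, 110, 50, 190), (10, 90, 50, 130)]
per_shelf = assign_to_shelves(boxes, bands)
```

The majority-overlap rule matters for products that straddle a shelf edge in the image: a box is counted once, on the shelf it mostly occupies, so per-shelf totals still sum to the rack total.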

Dual-Mode Share Measurement

Different decisions need different measures. Area-based share (brand surface relative to total product space) is most useful for shelf-space negotiation. Count-based share (facings) is most useful for facing-count tracking and operational compliance reporting. Reporting both modes side by side prevents the choice of measurement from quietly hiding a compliance gap.

Unknown-Object as Explicit Output

A real shelf always contains products the model has not seen. Hiding those in a catch-all class corrupts the share ratio; ignoring them inflates the share of recognised brands. We surface unknown product space as a first-class percentage output, and that signal triggers the labelling queue and retraining cycle. This kind of honest boundary handling is a recurring theme across our computer vision work.

Technical Specifications

Area-based SoS: brand pixel area / total product-space pixel area, per shelf
Count-based SoS: brand facings / total facings, per shelf
Unknown output: % of product space unrecognised, surfaced explicitly rather than excluded
Trigger: unknown product area exceeds an operator-defined threshold
Loop: surface unknowns → supervisor labels → add to training set → retrain
Effect: design intent is that each labelling cycle expands catalogue coverage and reduces the unknown %
Per-class SoS tracker: majority of recognised classes within low single-digit % differences; outliers reaching the low teens
Tracking artifact: per-class SoS percentage-difference spreadsheet tracked continuously across test runs
[Image: shopper browsing branded packaged goods on a grocery store shelf]

The Outcome

The system produced per-shelf share-of-shelf measurements in both area and count modes, with per-class percentage differences tracked continuously across test runs. For most recognised classes, those differences stayed in the low single digits; outliers reached into the low teens. Surfacing unknown product space as an explicit operational output, rather than absorbing it into a catch-all class, gave operators a live catalogue-coverage signal, and the retraining loop was designed as the mechanism for expanding coverage: surface the gap so that each labelling cycle can close it.

Two boundaries are worth naming. The percentage-difference figures track per-class measurement variability across runs; they are not a planogram-compliance number, and the system was not asked to deliver a planogram diff. And the value of the unknown-handling design depends on the supervisor labelling cycle actually being run: the architecture exposes the gap; closing it is an operational commitment. This workstream sits alongside in-cart object detection, product recognition, barcode detection, and security as part of a broader multi-year smart retail engagement.

Key Achievements

Per-shelf SoS measurement in both area and count modes (rack-level aggregation is insufficient for actionable compliance data)

Unknown product space surfaced as an explicit percentage output, not excluded or silently counted as zero

Active labelling loop (unknowns → supervisor labels → retrain), designed to progressively expand catalogue coverage

Per-class SoS percentage-difference tracker: majority of classes in low single digits, outliers in the low teens

SoS-as-ratio reporting deliberately chosen over raw counts to avoid precision disputes that obscure real compliance signals

Related Capabilities

Computer Vision Services

Our services span classical computer vision, human-supervised system design for legal compliance, video pipeline optimisation with tools like FFmpeg, custom adaptable models, and explainable AI for ethical transparency.


Retail AI Solutions

We build production-ready CV systems for smart retail environments (in-cart perception, shelf analytics, SKU recognition, and security), all deployable on existing camera infrastructure without costly hardware upgrades.


GPU Performance Engineering

We deliver GPU-accelerated inference pipelines optimised for constrained edge hardware and high-throughput server deployments: profiling-led, architecture-first, with measurable performance outcomes.


Building a Shelf Analytics System?

Shelf analytics is a measurement-design problem before it is a recognition problem. The decisions about what to measure, how to handle what cannot yet be recognised, and which signals to surface to operators shape every downstream number.