# Why vision systems in manufacturing are not all alike

Deploying a vision system on a production line is not a matter of pointing a camera at parts and training a model. The core architectural decision — inline versus offline inspection — shapes everything downstream: hardware selection, integration complexity, throughput requirements, and ultimately, what defects you can realistically catch. Getting this decision wrong early means retrofitting later.

This article covers the practical engineering decisions for manufacturing vision systems: where inline inspection makes sense, when offline is the right choice, how to select camera hardware, and what PLC integration actually involves.

## How do inline and offline inspection compare?

Inline inspection places the camera in the production stream. Every part is inspected as it moves through the line, at line speed, with no additional handling. The tradeoff: you must work within the constraints of the process — part orientation may be variable, vibration from adjacent machinery affects image quality, and cycle time determines maximum exposure.

Offline inspection routes parts to a dedicated inspection station. This allows controlled lighting, fixed part orientation, and longer exposure times. The tradeoff: it adds a handling step, introduces latency between production and reject detection, and typically inspects a sample rather than 100% of parts.
In our experience, inline inspection is the right choice when:

- Part defects that escape detection have downstream consequences (assembly failures, warranty claims)
- Production rates exceed what sampling can cover reliably
- The defect signature is visually distinct and stable (dimensional variation, surface contamination, colour deviation)

Offline inspection is appropriate when:

- Inspection requires multi-axis imaging (top, bottom, sides) that cannot be achieved inline
- Parts are too complex or variable in orientation to image reliably at speed
- The primary goal is process monitoring rather than 100% sorting

## Camera selection: line-scan vs area cameras

The choice between line-scan and area cameras is driven by part geometry and motion characteristics.

| Parameter | Line-Scan Camera | Area Camera |
| --- | --- | --- |
| Best for | Continuous web, cylindrical parts, fast conveyors | Discrete parts, stationary or slow-moving targets |
| Resolution | Very high in scan direction; unlimited length | Fixed sensor resolution |
| Throughput | High — single line read per encoder tick | Limited by frame rate and exposure |
| Cost | Higher; requires encoder synchronisation | Lower; simpler integration |
| Motion sensitivity | Designed for motion; requires consistent speed | Requires part to be stationary or uses strobed lighting |
| Calibration complexity | Higher — requires flat-field correction | Lower |

Line-scan cameras are the standard choice for web inspection (film, foil, textiles) and for imaging cylindrical parts that rotate past the sensor. For most discrete-part inspection on conveyors, area cameras with strobe lighting are simpler and sufficient.

Frame rate requirements for area cameras: at a conveyor speed of 1 m/s with a desired spatial resolution of 0.5 mm/pixel, you need the part to travel less than one pixel between frames. At 0.5 mm per pixel, that means frame intervals under 0.5 ms, or frame rates above 2000 fps — which is impractical.
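The frame-rate arithmetic above can be sketched directly (a minimal illustration; the function name is ours, not from any vision SDK):

```python
def required_frame_rate(conveyor_speed_m_s: float, resolution_mm_per_px: float) -> float:
    """Frame rate needed so the part travels less than one pixel between frames."""
    travel_mm_per_s = conveyor_speed_m_s * 1000.0
    max_frame_interval_s = resolution_mm_per_px / travel_mm_per_s
    return 1.0 / max_frame_interval_s

# 1 m/s conveyor, 0.5 mm/pixel target resolution
fps = required_frame_rate(1.0, 0.5)
print(f"{fps:.0f} fps required")  # 2000 fps — impractical without strobing
```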
The correct approach is strobe synchronisation: a short flash (50–200 µs) freezes motion regardless of frame rate, provided the strobe duration is short enough relative to conveyor speed.

## Illumination is not optional

Across deployments, illumination is the most commonly underspecified component and the most frequent cause of late-stage project failure. The model cannot compensate for poor image contrast.

Illumination choices that matter:

- Backlighting: best for silhouette-based dimensional checks; reveals holes and edge profiles
- Coaxial lighting: best for specular surfaces (metal, glass); reveals surface scratches by disrupting uniform reflection
- Ring lighting: general purpose, but shadows can obscure surface defects on curved parts
- Structured light (line lasers): required for height/3D measurement

Specify illumination before specifying the camera. The camera selection follows from the image you need to capture.

## PLC integration and rejection mechanisms

A vision system that detects defects but cannot act on them has limited value. Integration with the PLC (programmable logic controller) is what closes the loop: the vision system signals a reject, the PLC activates a diverter, pusher, or air blast to remove the part from the line.

Typical integration architecture:

1. Vision controller outputs a pass/fail signal (digital I/O) or a structured result (over EtherNet/IP, PROFINET, or EtherCAT, depending on PLC vendor)
2. PLC receives the signal, calculates part position using encoder tracking, and activates the rejection mechanism when the part reaches the diverter
3. Rejection confirmation sensor (typically a photoeye after the diverter) confirms the part was removed

The latency budget is tight on high-speed lines. A part travelling at 1 m/s covers 1 mm every millisecond. If the diverter is 500 mm downstream of the camera, the PLC has 500 ms to act — which sounds comfortable, but total latency (image capture + inference + I/O + PLC scan cycle + diverter actuation) must fit within this window.
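The latency budget can be checked with simple arithmetic. A sketch (the component latencies below are placeholders for illustration, not measured values):

```python
def latency_margin_ms(conveyor_speed_m_s: float, camera_to_diverter_mm: float,
                      component_latencies_ms: dict) -> float:
    """Remaining margin (ms) after total latency is subtracted from the
    camera-to-diverter travel window. Negative means the part is missed."""
    # A speed in m/s is numerically the same in mm/ms (1 m/s = 1 mm/ms),
    # so dividing mm by m/s yields milliseconds directly.
    window_ms = camera_to_diverter_mm / conveyor_speed_m_s
    return window_ms - sum(component_latencies_ms.values())

# Placeholder numbers, for illustration only
margin = latency_margin_ms(1.0, 500.0, {
    "image_capture": 10, "inference": 30, "io": 5,
    "plc_scan": 10, "diverter_actuation": 20,
})
print(f"margin: {margin:.0f} ms")  # 425 ms to spare at 1 m/s
```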
In our experience, total system latency of 50–100 ms is achievable with a well-configured setup; 200+ ms requires increasing the camera-to-diverter distance.

## Realistic rejection rates

Vision systems are sold on “zero defect” promises. The reality is more nuanced:

- False reject rate (FRR): good parts classified as defective. Typically 0.1–2% depending on part variability and inspection difficulty. FRR directly costs material and line throughput.
- False accept rate (FAR): defective parts passing inspection. This is the number that matters to your customer. The target varies by industry: automotive typically requires FAR below 10 ppm; consumer goods may tolerate 100–500 ppm.

Interaction between FRR and FAR: tightening the classifier threshold reduces FAR but increases FRR. The operating point is a business decision, not a purely technical one.

Benchmark your system against a manual inspection baseline before deploying. If manual inspection achieves 95% detection at 2% false reject, a vision system should outperform both numbers — otherwise the economics do not justify the capital cost. Typical well-configured vision systems achieve 99%+ detection with FRR under 0.5% for visually distinct defects on consistent parts.
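The FRR/FAR interaction can be made concrete by sweeping the classifier threshold over a held-out set. A sketch with synthetic defect-likelihood scores (all numbers are illustrative only):

```python
def frr_far(good_scores, defect_scores, threshold):
    """Scores at or above the threshold are rejected as defective."""
    frr = sum(s >= threshold for s in good_scores) / len(good_scores)
    far = sum(s < threshold for s in defect_scores) / len(defect_scores)
    return frr, far

# Synthetic scores for illustration: higher score = more defect-like
good = [0.02, 0.05, 0.1, 0.15, 0.3, 0.45]
bad = [0.4, 0.7, 0.9]

for threshold in (0.6, 0.4, 0.2):
    frr, far = frr_far(good, bad, threshold)
    print(f"threshold {threshold}: FRR {frr:.2f}, FAR {far:.2f}")
# A loose threshold spares good parts but lets defects through;
# tightening it catches more defects at the cost of rejecting good parts.
```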
## Checklist: vision system readiness before deployment

- Defect library defined with representative samples of every defect type and severity level
- Illumination selected and validated — images show consistent contrast across all defect types
- Camera and lens sized for required resolution at inspection distance
- Trigger mechanism (encoder, photoeye) synchronised and tested at line speed
- PLC integration tested with simulated pass/fail signals before camera integration
- Rejection mechanism physically tested and confirmed at maximum line speed
- False reject and false accept rates measured on a held-out test set before go-live
- Operator interface for reviewing rejected parts and false reject recovery defined

## Where projects typically fail

In our experience, the most common failure modes are illumination instability (ambient light changes between shifts), insufficient defect sample coverage during training (models fail on defect variants they have not seen), and inadequate PLC integration testing (rejection timing errors become apparent only at production speed).

The vision system is a sub-component of a larger quality control process. Its effectiveness depends on how it is integrated into operator workflows, how rejected parts are reviewed, and how model performance is monitored over time as parts and processes evolve.
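The checklist items on trigger synchronisation and rejection timing both come down to encoder-based part tracking. A minimal sketch of the idea — in practice this logic lives in the PLC, and all names here are hypothetical:

```python
from collections import deque


class RejectTracker:
    """Queue failed parts by encoder count until each reaches the diverter."""

    def __init__(self, camera_to_diverter_counts: int):
        # Encoder counts between camera and diverter, from line calibration
        self.offset = camera_to_diverter_counts
        self.pending = deque()  # encoder counts at which to fire the diverter

    def on_fail(self, encoder_count: int) -> None:
        """Vision system reported a defect at this encoder position."""
        self.pending.append(encoder_count + self.offset)

    def on_tick(self, encoder_count: int) -> bool:
        """Call on every encoder tick; True means fire the diverter now."""
        if self.pending and encoder_count >= self.pending[0]:
            self.pending.popleft()
            return True
        return False


tracker = RejectTracker(camera_to_diverter_counts=100)
tracker.on_fail(encoder_count=250)  # defect imaged at count 250
print(tracker.on_tick(349))         # False — part not yet at the diverter
print(tracker.on_tick(350))         # True — fire the diverter
```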