Cross-Platform TTS Inference Under Real-Time Constraints: ONNX and CoreML

TTS deployed to iOS, Android, and browser stays consistent only if compression is decided at training time — distill once, export to ONNX.

Written by TechnoLynx. Published on 01 May 2026.

What does deploying a TTS model to three platforms actually require?

In an edge text-to-speech inference optimisation project we ran, separate INT8 quantisation of the same TTS model for CoreML and ONNX Runtime produced audible quality differences at specific phoneme transitions — clean on iOS, a noticeable artefact on Android. The bug was not in the model. It was in the assumption that “INT8” meant the same numerical operation on both runtimes (an operational measurement from that deployment; the divergence appeared after the first cross-platform user-acceptance round, not in unit testing). This is the failure mode every multi-runtime TTS deployment hits if the compression decision is made per platform after training rather than as an architectural decision before it.

A text-to-speech model that runs satisfactorily in a research environment needs to produce consistent, real-time output on iOS (CoreML), Android (ONNX Runtime), and browser (WebGL/ONNX.js) targets. This is not a deployment engineering problem in the conventional sense — it is an architecture problem that must be solved before model training is complete.

The three runtimes process floating-point operations differently, apply different quantisation schemes if model compression is used, and have different memory and throughput constraints per device tier. A model that is optimised for one runtime and then ported to the others will behave consistently on the primary target and diverge on the secondary ones. For a TTS model, divergence manifests as quality differences at the quality-compression boundary: a phoneme transition that the model handles correctly on CoreML may produce an audible artefact on the ONNX Runtime version because the two quantisation implementations apply different precision reduction rules to that specific operation.
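The divergence described above can be reproduced in miniature. The sketch below quantises the same activation value under a symmetric and an asymmetric INT8 scheme; the scale and zero-point choices are illustrative, not the actual CoreML or ONNX Runtime implementations, but they show how two "INT8" runtimes can dequantise the same float to different values.

```python
# Sketch: why "INT8" is not one operation. Two common INT8 schemes --
# symmetric (zero point fixed at 0) and asymmetric (zero point derived
# from the data range) -- round the same float to different integers,
# so the dequantised values diverge. Both schemes are illustrative.

def quantize_symmetric(x, values):
    # Scale maps the largest magnitude onto the int8 range [-127, 127]
    scale = max(abs(v) for v in values) / 127.0
    q = max(-127, min(127, round(x / scale)))
    return q * scale  # dequantised value

def quantize_asymmetric(x, values):
    # Scale and zero point map [min, max] onto the uint8 range [0, 255]
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0
    zero_point = round(-lo / scale)
    q = max(0, min(255, round(x / scale) + zero_point))
    return (q - zero_point) * scale

# A skewed activation range, typical after a non-negative activation
activations = [-0.1, 0.0, 0.4, 1.5, 5.9]
x = 0.37  # the same value, quantised under both schemes
sym = quantize_symmetric(x, activations)
asym = quantize_asymmetric(x, activations)
print(f"symmetric:  {sym:.4f}")
print(f"asymmetric: {asym:.4f}")
print(f"divergence: {abs(sym - asym):.4f}")
```

Both results sit close to the original value, yet they differ from each other; at a sensitive phoneme transition, that per-operation disagreement is exactly the kind of numerical gap that surfaces as an audible artefact on one platform only.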

Why distillation outperforms per-platform quantisation for multi-runtime TTS

The distillation vs quantisation decision is particularly clear for TTS models deployed across three or more runtimes:

Quantisation per platform requires: (1) a CoreML quantisation pass with INT8 or float16 precision; (2) an ONNX Runtime quantisation pass with its own INT8 scheme; (3) a WebGL-compatible quantisation pass that avoids operations the runtime emulates slowly; and (4) validation of audio quality on each platform separately. For TTS, audio quality validation is domain-specific — perceptual quality metrics like MOS (Mean Opinion Score) and PESQ (Perceptual Evaluation of Speech Quality) must be measured on each platform’s output. Three platforms means three independent quality validation cycles, and quality may diverge between them at the compression boundary.

Distillation to a portable model requires: (1) training a smaller student model using knowledge distillation from the larger teacher; (2) exporting the student model once to ONNX; (3) importing the ONNX model into CoreML via CoreML Tools’ ONNX import path; and (4) validating quality once against a reference. The same model artefact runs on all three platforms. Quality at the compression boundary is determined by the distillation training, not by the runtime’s quantisation scheme, and it is consistent across platforms.

The TTS distillation architecture

In the edge text-to-speech inference optimisation work we carried out for a mobile application requiring real-time speech synthesis, the production requirement was consistent speech quality across iOS, Android, and desktop browser targets within a latency budget that excluded heavy streaming synthesis approaches.

The starting point was a full-size TTS model with high output quality but inference latency significantly above the real-time threshold on mid-range mobile devices. Three compression strategies were evaluated:

INT8 quantisation per platform. Latency: within budget on iOS and Android, borderline in the browser. Quality impact: audible divergence between iOS and Android at some phoneme transitions.

float16 quantisation (ONNX). Latency: within budget on iOS and Android, exceeds budget on low-end browser devices. Quality impact: consistent quality, but browser performance insufficient.

Knowledge distillation (smaller architecture). Latency: within budget on all tested devices across all three platforms. Quality impact: consistent across platforms, with a marginal quality reduction vs the full-size model.

The distilled model used a reduced number of encoder and decoder layers with wider hidden dimensions relative to the full-size model — a dimension configuration that maintained prosody and naturalness at the cost of some fine-grained phoneme distinction, which was acceptable for the application. The teacher-student training used a combination of output distribution matching (KL divergence loss on the output spectrogram distribution) and intermediate layer alignment, which preserved the phoneme boundary behaviour that had been problematic in the quantised versions.
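The two loss terms used in the teacher-student training can be sketched in plain Python on toy numbers. The temperature, the 0.5 weighting, and the layer pairing below are illustrative assumptions, not the project's actual settings.

```python
import math

def softmax(logits, temperature=1.0):
    # Softening with temperature > 1 exposes the teacher's relative
    # preferences between outputs, not just its argmax
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student distribution q is from teacher p
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def layer_alignment(teacher_hidden, student_hidden):
    # Mean squared error between paired intermediate activations
    n = len(teacher_hidden)
    return sum((t - s) ** 2 for t, s in zip(teacher_hidden, student_hidden)) / n

def distillation_loss(t_logits, s_logits, t_hidden, s_hidden,
                      temperature=2.0, alpha=0.5):
    # Output distribution matching on softened logits ...
    kl = kl_divergence(softmax(t_logits, temperature),
                       softmax(s_logits, temperature))
    # ... plus intermediate layer alignment, weighted by alpha
    return alpha * kl + (1 - alpha) * layer_alignment(t_hidden, s_hidden)

loss = distillation_loss(
    t_logits=[2.0, 0.5, -1.0], s_logits=[1.8, 0.7, -0.9],
    t_hidden=[0.3, -0.2, 0.8], s_hidden=[0.25, -0.1, 0.75])
print(f"combined distillation loss: {loss:.5f}")
```

The intermediate-layer term is what does the work at phoneme boundaries: it penalises the student for diverging from the teacher's internal representations, not only from its final spectrogram distribution.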

ONNX as cross-platform deployment architecture

The deployment pipeline for the distilled model treated ONNX as the canonical deployment format rather than as a one-time conversion target. This meant:

Version-controlled ONNX export. Each training checkpoint was exported to ONNX as part of the model validation pipeline. The ONNX model version was tracked alongside the training checkpoint version, and the ONNX export was validated against the training environment output before the model advanced to the device testing phase.
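A minimal form of that validation gate, assuming a fixed probe input and an elementwise tolerance (both illustrative), might look like:

```python
# Sketch of the export-validation gate: before a checkpoint advances to
# device testing, the ONNX export's output on a fixed probe set must
# match the training environment's output within tolerance. The frame
# values and tolerance below are illustrative, not project numbers.

TOLERANCE = 1e-3  # max allowed per-element divergence

def outputs_match(reference, exported, tol=TOLERANCE):
    """True if every element of the exported model's output is within
    `tol` of the training environment's reference output."""
    return all(abs(r - e) <= tol for r, e in zip(reference, exported))

# Reference frames from the training environment vs the ONNX export
reference_frames = [0.1200, -0.0310, 0.5540, 0.0020]
exported_frames  = [0.1201, -0.0309, 0.5543, 0.0019]

assert outputs_match(reference_frames, exported_frames)
print("export validated; checkpoint may advance to device testing")
```

In practice the probe set would be a fixed batch of representative sentences and the comparison would run over full spectrogram tensors, but the gate's shape is the same: a deterministic pass/fail check tied to the checkpoint version.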

CoreML import via ONNX. The iOS deployment used CoreML Tools’ onnx.convert() path to import the ONNX model into CoreML format. This maintained a single model artefact (the ONNX file) as the source of truth and eliminated the need for a separate CoreML training or export step.

Runtime compatibility testing. The ONNX model was tested against multiple ONNX Runtime versions to validate that the exported operations were supported consistently. Operations that required ONNX opset versions above the deployment target’s support level were identified and replaced with equivalent operations in the lower opset.
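The opset side of this testing can be expressed as a static pre-export check. In the sketch below, the op names, required opset versions, and per-runtime support levels are illustrative inputs, not a real audit of any runtime.

```python
# Sketch of runtime compatibility testing as a static check: given the
# opset each exported op requires and the maximum opset each target
# runtime supports, flag ops that must be replaced or lowered before
# the model can be deployed to that runtime.

def audit_opsets(model_ops, runtime_max_opsets):
    """Return {runtime: [ops whose required opset exceeds support]}."""
    findings = {}
    for runtime, max_opset in runtime_max_opsets.items():
        unsupported = [op for op, required in model_ops.items()
                       if required > max_opset]
        if unsupported:
            findings[runtime] = sorted(unsupported)
    return findings

# Illustrative model ops and target runtimes
model_ops = {"LayerNormalization": 17, "Gelu": 20, "MatMul": 13}
runtimes = {"onnxruntime-mobile": 15, "onnx.js-webgl": 13, "coreml-import": 14}

for runtime, ops in audit_opsets(model_ops, runtimes).items():
    print(f"{runtime}: replace or lower {', '.join(ops)}")
```

Running this before export identifies exactly the replacements the text describes: operations above the deployment target's opset support are swapped for equivalent lower-opset constructions while the architecture is still cheap to change.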

The result was a deployment pipeline where a single distilled ONNX model produced consistent TTS output on all three target platforms, validated against a single quality baseline rather than three separate ones. The latency on the lowest-specification test devices in the deployment cohort was within the real-time synthesis budget.

The runtime portability implication

The ONNX Runtime and CoreML are not equivalent execution environments. Specific neural network operations (custom attention mechanisms, non-standard activation functions, complex control flow) may be supported on one and not the other, or supported with different numerical precision. This means model architecture decisions made early in training — before the cross-platform deployment requirement has been specified precisely — can create porting problems later.

The equivalent of the device capability baseline for multi-runtime TTS deployment is a runtime operation compatibility audit: before finalising the model architecture, validate that all operations used by the model are supported natively (not emulated) by all target runtimes at their target precision level. Operations that are emulated add disproportionate latency on low-capability devices and are a common source of budget overruns in cross-platform inference projects.

The operation compatibility audit checklist

The audit is the lowest-cost step in the entire cross-platform deployment workflow and the one most often skipped. The checklist below names the operations and runtime properties that should be validated before architecture is locked in. Each item is a yes/no check against the documented opset and feature support of each target runtime version.

1. ONNX opset version supported by the lowest target runtime. What to check: the minimum opset version supported by the oldest ONNX Runtime version in the device cohort, the WebGL/WebGPU ONNX.js version, and the CoreML Tools ONNX import path; the model must export to that opset or below. Where it bites: a model exported at opset 17 will fail to load on a runtime that supports up to opset 14. Discovered late, this forces either an export downgrade (potentially regressing quality) or dropping the affected device tier.

2. Custom attention mechanisms. What to check: that the attention pattern (standard scaled dot-product, multi-query, grouped-query, sliding-window, sparse) maps to a supported operation on each runtime; CoreML and ONNX Runtime have diverged on which attention variants they implement natively. Where it bites: custom attention falls back to a per-element implementation and inference latency multiplies by 5–20× on the affected runtime.

3. Activation functions outside the standard set. What to check: ReLU, sigmoid, tanh, GELU (approximate and exact variants), and SiLU/Swish are widely supported; custom activations (Mish, Snake, learned activations) are not. Where it bites: a custom activation falls back to a graph of primitive operations, with both latency and numerical precision implications.

4. Normalisation operations. What to check: LayerNorm and BatchNorm are universally supported; RMSNorm, GroupNorm, and conditional normalisation variants are not uniformly supported across CoreML versions. Where it bites: unsupported normalisation breaks model export entirely or produces a fallback that violates the latency budget.

5. Dynamic shape support. What to check: TTS models often have variable-length input (text) and variable-length output (audio frames); validate that each runtime supports dynamic axes for the relevant tensor dimensions. Where it bites: a static-shape-only runtime requires either bucketed input padding (latency overhead, quality implications at boundaries) or per-shape model export (deployment artefact explosion).

6. Control flow operations. What to check: If, Loop, and Scan are supported variably across runtimes; autoregressive decoding loops that use Loop for variable-length output may not export cleanly. Where it bites: a Loop fallback runs the body in host code rather than on the runtime, producing per-step host-runtime transition overhead.

7. Numerical precision per operation. What to check: that float16 (or the chosen reduced precision) is supported for every operation in the model on every target runtime; some runtimes silently promote specific operations to float32, with bandwidth and latency implications. Where it bites: a model designed around float16 throughput finds half its operations promoted to float32, with the latency budget exceeded as a consequence.

8. Tensor layout assumptions. What to check: NCHW vs NHWC layout, and per-runtime conversion costs for layout transitions; CoreML prefers a specific layout that may not match the ONNX export default. Where it bites: hidden layout transposes inside the inference graph add latency that does not appear when profiling in the development environment.

9. Batched vs streaming inference paths. What to check: TTS in production may run as batched synthesis (full sentence at once) or streaming synthesis (chunk by chunk); validate that the chosen path is performant on each runtime, since streaming exposes per-chunk overhead that batching amortises. Where it bites: a model architected for batched synthesis in the development environment may be unworkable as streaming synthesis on the deployment runtime, forcing a late architecture change.

10. CoreML version coverage of the iOS device cohort. What to check: export to a given CoreML format version requires a minimum iOS version on the deployed devices; older iOS versions cannot load newer CoreML format versions. Where it bites: a model targeted at the latest CoreML format excludes a meaningful fraction of the active device base.

A model architecture that passes this audit cleanly can be developed against any one runtime and ported to the others with minimal surprise. A model that fails one or more items at the audit stage should be modified at the architecture stage rather than at the deployment stage — the modification is cheaper before training is complete than after.
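As one concrete example of planning around an audit failure, checklist item 5's bucketed-padding fallback for static-shape runtimes can be sketched as follows. The bucket lengths and pad token are illustrative assumptions.

```python
# Sketch: when a target runtime is static-shape only, variable-length
# TTS input is padded up to a fixed set of bucket lengths, trading one
# exported model per bucket against padding overhead. Bucket sizes
# below are illustrative, not tuned values.

BUCKETS = [32, 64, 128, 256]  # token lengths the models were exported at

def pick_bucket(seq_len, buckets=BUCKETS):
    """Smallest exported bucket that fits the input, or None if too long."""
    for b in sorted(buckets):
        if seq_len <= b:
            return b
    return None  # exceeds every exported shape; must truncate or reject

def pad_to_bucket(token_ids, pad_id=0, buckets=BUCKETS):
    bucket = pick_bucket(len(token_ids), buckets)
    if bucket is None:
        raise ValueError(f"input length {len(token_ids)} exceeds max bucket")
    # Padding overhead is wasted compute: (bucket - len) / bucket
    return token_ids + [pad_id] * (bucket - len(token_ids))

padded = pad_to_bucket(list(range(40)))
print(f"bucket: {len(padded)}, overhead: {(len(padded) - 40) / len(padded):.0%}")
```

The overhead line makes the trade-off visible: a 40-token sentence in a 64-token bucket wastes over a third of the compute, which is exactly the latency cost the checklist warns about when dynamic axes are unavailable.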

What remained imperfect

The distilled cross-platform TTS deployment met its quality and latency targets, but two limitations remain worth naming:

First, the distilled model is measurably lower quality than the full-size teacher on certain speech features — specifically, fine-grained prosody control on long sentences and the realism of expressive speech (emphatic delivery, emotional inflection). The application accepted this trade-off because the use case prioritised latency and platform consistency over peak quality, but a deployment that cannot accept any quality reduction would need a different approach: a server-side fallback for the highest-quality requests, a per-platform optimised pipeline at the cost of consistency, or a larger student model at the cost of latency on lower-tier devices.

Second, even with ONNX as the canonical deployment format, the runtime versioning matrix continued to require maintenance over time. New ONNX Runtime versions, new CoreML versions, and new browser engine versions periodically introduced changes in operation support, numerical behaviour, or default precision. Each required a regression test pass against the deployment cohort. The single-artefact deployment reduced the per-release cost compared to a per-platform-quantised approach, but it did not eliminate the cross-platform validation cost — it shifted that cost from the deployment phase to the maintenance phase.

For teams approaching this architecture for the first time, a GPU and Inference Optimisation Assessment evaluates the model architecture and compression strategy against the runtime compatibility requirements of the target deployment environment.
