For years, ultra-low latency in contribution was packaged as a magic number, something you could just stick in a spec sheet and call “better.” Vendors would quote encode times and leave it at that. The implication was always the same: the lower the number, the better the real-world performance.
But that’s a narrow view, and in some cases, it’s downright misleading. Latency isn’t just an encode number. It is the sum of many stages: capture, frame grabbing, encoding, muxing, transport, decoding, synchronization, and final display. If you only report one of those stages, you’re not telling the whole story.
There’s a reason some practitioners in our industry like to talk about “lying about latency”: it highlights how easy it is to make partial measurements that sound impressive without reflecting operational reality.
In contrast, true contribution workflows demand latency accountability end-to-end. That’s why reaching 220 milliseconds in 1080p60 4:2:2 10-bit (in both HEVC and AVC) without feature loss is an important milestone for TITAN Edge. This isn’t a stripped-down mode with traded-off quality, dropped audio, or reduced throughput. This is full-function contribution at low delay.
Latency Is More than an Encoder Number
Too often, people celebrate low encode latency as if that solves the whole problem. But real glass-to-glass delay includes many pieces. Camera capture has its own latency before a frame ever reaches the compressor. Once compressed, multiplexers, transport jitter buffers, and network behavior add more delay. Decoders and display pipelines introduce yet additional latency.
If you only measure the encoder, you’re looking at one piece of a much bigger puzzle. Some vendors push low encode numbers while ignoring the rest of the chain; that’s exactly the sort of “latency myth” that propagates when you don’t call it out explicitly.
This is where software-centric architectures shine. Because the same implementation handles each codec path consistently, you don’t see codec-specific latency discrepancies caused by different design choices. In traditional hardware approaches, HEVC often appears faster simply because that path was implemented with faster hardware blocks — but that does not reflect codec behavior, it reflects architectural choice. With a software design, both AVC and HEVC follow the same processing model, and latency differences come from the codec algorithms themselves, not from disparate hardware implementations.
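The stage-by-stage view above can be made concrete with a small budget calculation. This is an illustrative sketch only: the per-stage figures below are hypothetical placeholders, not measured TITAN Edge values, and the stage names are assumptions for the example.

```python
# Illustrative glass-to-glass latency budget for an SDI-to-SDI chain.
# All per-stage figures are hypothetical placeholders, not measured values.
STAGES_MS = {
    "camera_capture": 20,
    "frame_grab": 17,        # roughly one frame period at 60 fps
    "encode": 220,           # low-delay contribution profile
    "mux_and_transport": 40,
    "jitter_buffer": 60,
    "decode": 120,
    "display": 17,
}

def glass_to_glass_ms(stages: dict) -> int:
    """Sum per-stage delays; quoting any single stage alone understates the total."""
    return sum(stages.values())

total = glass_to_glass_ms(STAGES_MS)
print(f"encode only: {STAGES_MS['encode']} ms; end-to-end: {total} ms")
```

Even with these made-up numbers, the point holds: the encoder contributes less than half of the glass-to-glass delay, which is why a single quoted encode figure tells you little about operational behavior.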
Where Latency Accumulates in an SDI-to-SDI Contribution Chain

Quality at Speed — A False Dilemma
There remains a historical reflex: if delay goes down, quality must suffer. That was true back when low-delay profiles forced reduced feature sets and heavy compromises.
Today, however, we engineer each profile with defined video quality targets. Maintaining contribution-grade quality at 220 ms compared to conventional reduced-delay modes requires only a modest bitrate adjustment (on the order of a few percent). That means you’re not trading off clarity for speed. You’re getting both.
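The “modest bitrate adjustment” can be sketched as simple arithmetic. The 5% overhead used here is an assumed placeholder standing in for “a few percent”; it is not a published TITAN Edge figure.

```python
# Hypothetical sketch: bitrate needed to hold a fixed quality target when
# moving from a conventional reduced-delay mode to a 220 ms low-delay profile.
# The 5% overhead is an assumed placeholder for "a few percent".
def low_delay_bitrate(base_mbps: float, overhead_pct: float = 5.0) -> float:
    """Scale the reference bitrate by a small overhead to preserve quality."""
    return base_mbps * (1.0 + overhead_pct / 100.0)

# A 50 Mb/s contribution feed would need about 52.5 Mb/s at the same quality.
print(low_delay_bitrate(50.0))
```

The design point is that the cost of low delay shows up as a small, predictable bitrate delta rather than as a visible quality loss.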
The “you can only have one” view is outdated. Modern architectures treat speed and quality as parameters to balance intelligently, rather than as opposing forces.
Why 220 ms Matters Operationally
In real production environments, teams are no longer co-located. Remote and centralized workflows are the norm because they reduce costs and improve flexibility. But for these models to work, delay must be short enough that talkback feels natural, replay decisions remain frame-accurate, and operators don’t feel like they’re fighting the clock rather than the play.
Around 220 ms on the encoding side (and approximately 120 ms on decode) keeps the end-to-end chain predictable and professional-grade. That means directors, camera operators, audio mixers and VAR officials can work together seamlessly, even when they’re physically apart.
It’s not just a number. It’s about real workflow behavior.
Remote production workflow

Software Freedom Without Compromise
Traditionally, hardware designs reached low latency by dedicating fast blocks of silicon to certain tasks. That often meant only a subset of features, static algorithms, or optimized paths that couldn’t evolve quickly. Those hardware trade-offs get buried in datasheets, but they show up in production.
Software architectures transform that equation.
Latency and density behave like dedicated hardware (stable and predictable), but everything else behaves like software: flexible and able to evolve. You get advanced processing such as high-quality frame rate conversion, refined P→I conversion, LUT-based HDR normalization, and Dolby E handling for audio layout adaptation, all without pushing latency outside its envelope.
Because this is software defined, improvements happen through evolution, not platform replacement. Systems get better over time, without forcing customers into costly hardware refresh cycles.
This is the true meaning of “no compromise.” It isn’t just about numbers. It’s about features, flexibility and evolution, all at low delay.
Latency as a Design Constant
Once you treat latency as a primary design parameter (not a PR talking point) everything changes. Every software release aims to improve capability while preserving the latency envelope. Density remains stable. Quality remains predictable. Operational behavior remains consistent.
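Treating latency as a design constant can be pictured as a release gate: every build is checked against a fixed per-stage budget. This is a minimal sketch; the envelope values and measurements below are hypothetical, not an actual TITAN Edge test suite.

```python
# Sketch of a release-gate check that treats latency as a design constant:
# a build passes only if every measured stage stays inside its budget.
# Envelope values and the "build" measurements are hypothetical.
ENVELOPE_MS = {"encode": 220, "decode": 120}

def within_envelope(measured_ms: dict, envelope_ms: dict) -> bool:
    """Pass only if every budgeted stage is measured at or below its limit."""
    return all(measured_ms.get(stage, float("inf")) <= budget
               for stage, budget in envelope_ms.items())

build_a = {"encode": 218, "decode": 119}   # new feature, budget held
build_b = {"encode": 231, "decode": 119}   # new feature, encode budget broken
print(within_envelope(build_a, ENVELOPE_MS), within_envelope(build_b, ENVELOPE_MS))
```

Gating releases this way is what turns “low latency” from a one-off benchmark into a property the platform preserves as features are added.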
Customers benefit from a living platform, not a carved-in-stone specification.
That’s another way the industry sometimes glosses over reality: implying that lower latency automatically means feature loss, capacity loss or degraded audio. Those trade-offs are choices, not necessities.
Ready for What Comes Next
The stability of latency across codec paths (not just fast numbers in one isolated measurement) is a genuine advantage when you look toward codec evolution, sports workflows, HDR migration, and future distribution demands.
Networks and workflows can evolve, but latency remains controlled. That’s the architecture you want if you’re building long-lived contribution systems that need to support tomorrow’s content expectations.
The Bigger Picture
Lower latency combined with full features and continuous software evolution creates an ideal foundation for:
- Professional contribution
- Remote production
- Centralized production
- VAR and remote officiating workflows
We’re not chasing marketing numbers. We’re engineering workflows.
Latency isn’t a talking point. Latency is a design parameter.
Looking Ahead
If 220 ms becomes the new operational baseline, the next frontier is already visible: around 150 ms profiles for the most demanding live applications.
Because once latency becomes predictable and controlled, it stops being a limitation, and becomes a parameter engineers can shape intentionally.
About the Author

Solution Marketing Senior Director at Ateme
Julien joined Ateme in 2001, starting in the Hardware Department before moving into Product Management, where he led the launch and evolution of the Kyrion product line.
In 2017, he co-founded the BISS-CA standard with the EBU, reshaping the secure distribution of international live events.
He is currently Solution Marketing Director for Contribution and Distribution, driving partner and customer engagement around the Kyrion and TITAN product lines.