Frequently Asked Questions

This information is intended to address some of the questions that often come up and to provide a better understanding of ATEME’s products, solutions and the video delivery ecosystem. It provides basic information, sometimes about complex topics, with links to more detailed information.

We invite you to contact us if you require more in-depth knowledge.

Select the topic:

How to meet audio loudness regulations?

The ITU standardized loudness measurement in ITU-R BS.1770 with a unit named LKFS. This definition was leveraged by regulatory bodies to define upper limits, for example the US FCC with the CALM Act referencing ATSC A/85, or the French CSA referencing EBU R 128. Solutions like Dolby Pro Loudness Correction make it possible to comply with such regulations in all audio formats, by either correcting the dialnorm metadata (for DD/DD+) or normalizing the audio itself (for other audio formats).
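
As a rough illustration of the normalization path (not the dialnorm path), here is a minimal sketch assuming the integrated loudness has already been measured with a BS.1770 meter; the -23 LUFS target follows EBU R 128 and all names are illustrative.

    import numpy as np

    def normalization_gain_db(measured_lufs: float, target_lufs: float = -23.0) -> float:
        """Static gain (dB) needed to bring a programme to the target loudness."""
        return target_lufs - measured_lufs

    def apply_gain(samples: np.ndarray, gain_db: float) -> np.ndarray:
        """Apply the gain to float PCM samples in the -1.0..1.0 range."""
        return np.clip(samples * 10.0 ** (gain_db / 20.0), -1.0, 1.0)

    # A programme measured at -19.5 LUFS needs -3.5 dB of gain to reach -23 LUFS.
    # For DD/DD+, the same measurement would instead be written into the dialnorm field.
    print(normalization_gain_db(-19.5))   # -> -3.5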

How to organize an Audio Description workflow?

Audio Description provides blind and visually impaired audiences with a narrator's voice describing what is happening on screen. It is typically delivered by production as an additional stereo track, whose first channel is meant to be mixed into the main audio according to cue tones carried in the second channel, usually during natural pauses in dialogue. This mix can happen on the receiver side, which saves audio bandwidth, or on the broadcaster side, which lowers the STB impact since the narration track is pre-mixed.
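
The broadcaster-side pre-mix can be pictured with the sketch below; it assumes the cue tones have already been decoded into a 0-to-1 activation envelope, and the -9 dB ducking depth and all names are illustrative.

    import numpy as np

    def premix_audio_description(program: np.ndarray,
                                 narration: np.ndarray,
                                 envelope: np.ndarray,
                                 duck_db: float = -9.0) -> np.ndarray:
        """Mix a mono narration track into a stereo program.

        program:   (n, 2) float PCM main audio
        narration: (n,)   float PCM narrator voice (first channel of the AD track)
        envelope:  (n,)   0..1 values derived from the cue tones (1 = narration active)
        """
        duck = 10.0 ** (duck_db / 20.0)
        gain = 1.0 + envelope * (duck - 1.0)          # duck the program only while narrating
        mixed = program * gain[:, None] + (narration * envelope)[:, None]
        return np.clip(mixed, -1.0, 1.0)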

How to provide a true 5.1 experience on a 2.0 delivery infrastructure?

Delivering a multi-channel 5.1 audio experience to end users over an audio delivery architecture designed for stereo can be a major challenge. Surround technologies, like DTS Neural Surround, greatly simplify this task without compromising on quality. Before a traditional stereo transmission, 5.1 assets are down-mixed and typically encoded in AAC. On the player side, rendering can be done directly in stereo or, after an up-mix step, in high-fidelity 5.1.
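
A conventional ITU-style down-mix can be sketched as follows; the channel order and the -3 dB coefficients are common practice (BS.775-style) rather than a requirement of any particular product, and the LFE is simply dropped.

    import numpy as np

    def downmix_51_to_stereo(ch: np.ndarray) -> np.ndarray:
        """Down-mix 5.1 PCM (columns assumed to be L, R, C, LFE, Ls, Rs) to stereo."""
        L, R, C, _, Ls, Rs = (ch[:, i] for i in range(6))
        k = 0.7071                                # -3 dB for centre and surround channels
        lo = L + k * C + k * Ls
        ro = R + k * C + k * Rs
        stereo = np.stack([lo, ro], axis=1)
        peak = max(np.max(np.abs(stereo)), 1.0)   # crude protection against clipping
        return stereo / peak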

What is DPI (Digital Program Insertion)?

As the name suggests, DPI is a feature that allows equipment to insert content into a live stream. Its main application is ad insertion, where commercial clips are inserted on the fly between the broadcaster’s headend and the customer’s screen. We talk about regionalized ad insertion when the inserted content may vary between regions.

How does DPI work?

The splicer is responsible for the content substitution and has to be triggered to manage insertion windows. Today, for MPEG transport streams, the SCTE-35 standard defines the marker messages that are inserted into the live stream.
Once the splicer detects and decodes a marker (which is actually a splice event), one or more pieces of content provided by a streamer are inserted. The splice event can also carry the overall duration of the content insertion.
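
Conceptually, the splicer turns the markers into a playout schedule that alternates between the network feed and the ad streamer. The sketch below is a deliberately simplified model of that logic; the SpliceEvent fields are a distillation of an SCTE-35 splice_insert, not the actual binary syntax.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class SpliceEvent:              # simplified view of an SCTE-35 splice_insert
        pts_s: float                # splice point, in seconds of stream time
        out_of_network: bool        # True = start of the ad break, False = return to network
        duration_s: float = 0.0     # planned length of the insertion window

    def build_schedule(events: List[SpliceEvent], total_s: float) -> List[Tuple[float, float, str]]:
        """Turn splice events into a (start, end, source) playout schedule."""
        schedule, cursor, source = [], 0.0, "network"
        for ev in sorted(events, key=lambda e: e.pts_s):
            schedule.append((cursor, ev.pts_s, source))
            cursor, source = ev.pts_s, ("ad-streamer" if ev.out_of_network else "network")
        schedule.append((cursor, total_s, source))
        return schedule

    # A 30 s ad break signalled at t=600 s in a one-hour stream:
    events = [SpliceEvent(600.0, True, 30.0), SpliceEvent(630.0, False)]
    print(build_schedule(events, 3600.0))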

What is blackout management?

Blackout management is part of DPI in that it consists in modifying live streams for rights reasons. Blackout management requires the same architecture and the same triggers.

What are the different captions and subtitles formats?

Subtitles consist of dialogue transcription or translation, while captions also include descriptions of non-speech elements for the hearing impaired. Both can be “open”, meaning they are presented to all viewers, typically because they were “burned” into the video, or “closed” when they remain user selectable. Major broadcast subtitling and captioning formats include CEA-608/708 CC, DVB Subtitles and DVB Teletext.

What is the difference between bitmap subtitles and text subtitles?

When they are not directly burned into the video, subtitles or captions can be transmitted alongside the video in text or image form. Text forms allow greater rendering flexibility, whereas image forms offer greater source fidelity. While converting from text to image formats is straightforward, the opposite is not true, especially since images are often used for languages with complex characters and ideograms. CEA-608/708 CC and DVB Teletext are text based, whereas DVB Subtitles and DVD Subtitles are image based.
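
The "text to image" direction can be illustrated in a few lines with Pillow; the size, colors and default font below are arbitrary, and a real DVB Subtitles encoder would additionally quantize to a palette and wrap the bitmap in the DVB region/object structures.

    from PIL import Image, ImageDraw

    def render_subtitle_bitmap(text: str, width: int = 720, height: int = 60) -> Image.Image:
        """Render one subtitle line as a bitmap with a transparent background."""
        img = Image.new("RGBA", (width, height), (0, 0, 0, 0))
        draw = ImageDraw.Draw(img)
        text_w = draw.textlength(text)                    # pixel width with the default font
        draw.text((int((width - text_w) / 2), height // 3), text, fill=(255, 255, 255, 255))
        return img

    render_subtitle_bitmap("Hello, world").save("subtitle.png")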

What are the challenges?

Internet availability, driven by mass-market broadband services, is steadily improving. Still, the Internet remains an unmanaged network, as data packets can travel through various paths. Congestion and jitter can occur and cause packet loss at the receiving side. Moreover, the available bandwidth from a given location can vary depending on the ongoing IP traffic. To overcome these challenges, special robustness mechanisms are required.

What contribution applications can be done over the Internet?

Live news reports, entertainment ceremonies, on-field sports interviews and music festivals are typically the kind of events where low-cost point-to-point or point-to-multipoint contribution applies. As long as Internet access is available, broadcasters can now use the Internet to solve “the first mile connectivity challenge” and ingest live feeds into their managed network where dedicated fiber is not available. Another application is long-haul contribution at very low cost: a remote news office in a foreign country can now easily provide live feeds to the headquarters at a very low operating cost.

Why is Pro-MPEG FEC not sufficient? How does ARQ help?

Pro-MPEG FEC ensures a level of quality of service that is generally good enough for managed networks where the bandwidth is predictable: the Forward Error Correction sends additional parity packets ahead of time, allowing a limited number of lost packets to be recovered. The Internet is not as predictable as a managed network, and a packet retransmission mechanism is the only way to ensure complete IP packet recovery in case of packet loss. Indeed, if a packet is missing at the receiving side (IRD), the receiver requests this particular packet from the transmitting side (encoder).
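
The retransmission principle can be sketched from the receiver's point of view as below; it assumes RTP-style increasing sequence numbers and omits the timers, retry limits and jitter buffer a real IRD would use.

    class ArqReceiver:
        """Toy receiver that tracks gaps in sequence numbers and requests them again."""

        def __init__(self):
            self.expected = None        # next sequence number we expect to see
            self.pending_nacks = set()  # packets we still need to ask the encoder for

        def on_packet(self, seq: int):
            if self.expected is not None and seq > self.expected:
                # Everything between the expected number and this one is missing.
                self.pending_nacks.update(range(self.expected, seq))
            self.pending_nacks.discard(seq)            # a retransmission may fill a hole
            self.expected = max(self.expected or 0, seq + 1)

    rx = ArqReceiver()
    for s in [1, 2, 5, 3, 6]:          # packet 4 never arrives on the first try
        rx.on_packet(s)
    print(sorted(rx.pending_nacks))    # -> [4], to be requested from the encoder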

Why would I need a Network Management System (NMS)?

An NMS can be seen as a cornerstone of a network, as it provides at a glance the exact status of the managed video headend. An NMS can be used for several purposes, depending on the operator’s needs and operational constraints:

  • The first purpose is obviously to monitor devices. The NMS receives and aggregates information coming from supervised devices and displays it in real time. The collected information is then available from any remotely connected computer with the appropriate access rights. The NMS is also responsible for managing redundancy mechanisms: via automated device switchover, the NMS ensures service availability.
  • Finally, the NMS can be used for service management (creation, update and deletion), Service Level Agreement (SLA) monitoring, automated reporting, or even push notifications via email or SMS.

Can I integrate any kind of device in my NMS?

The answer depends on the NMS manufacturer: either it has developed its own NMS to manage its own range of products, or it is device-vendor independent.
In the first case, the NMS can integrate external devices to allow wide project deployments, but it is of course rare to find competitors’ equipment in the compatibility list.
In the second case, the NMS manufacturer benefits more from a wide compatibility list and can thus integrate almost any device, whether or not the devices compete with each other.

How is communication between devices and the NMS done?

Traditionally, NMSs have been used to manage elements reachable from an intranet. Most of the available Internet protocols can be used to exchange information between an NMS and a device: HTTPS, SNMP, SOAP, FTP, SMTP, IMAP, SSH, TELNET, SSL, TCP and many more. On the other hand, some devices that do not embed an Ethernet connection must be accessed differently: through GPIO or GPIB, for instance. This type of equipment is traditionally managed through a gateway that adapts the communication from the original format to an Ethernet-compatible one.
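
As a bare-bones illustration of IP-based supervision, the sketch below polls devices over HTTP; the /api/status endpoint, addresses and JSON fields are hypothetical, and a production NMS would just as often rely on SNMP polls and traps.

    import json
    import urllib.request

    DEVICES = {"encoder-1": "http://10.0.0.11", "encoder-2": "http://10.0.0.12"}

    def poll_device(base_url: str) -> dict:
        """Fetch one device's status over HTTP (hypothetical REST endpoint)."""
        try:
            with urllib.request.urlopen(base_url + "/api/status", timeout=2) as resp:
                return json.load(resp)
        except OSError:
            return {"state": "unreachable"}

    def poll_all() -> dict:
        """Aggregate view that the NMS displays and feeds into switchover decisions."""
        return {name: poll_device(url) for name, url in DEVICES.items()}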

What do NBI and SBI stand for?

NBI and SBI stand for North Bound Interface and South Bound Interface. These acronyms are used when talking about the interconnection between NMSs. In a wide network, sub-parts can be managed by different NMSs; usually, each main manufacturer brings its own NMS to manage its own fleet of devices. The whole network must nevertheless be manageable from a single point of access: all the NMSs must be interconnected and managed by a so-called upper NMS (the one having the global network overview, with a link to all the lower NMSs). The NBI is thus the interface from a lower NMS to the upper NMS, while the SBI is the interface from the upper NMS to a lower NMS.

Can I manage a device that does not support the protocols mentioned above?

The answer fully depends on the protocols and interfaces supported by the NMS itself, and on the capacity of the NMS manufacturer to include new devices in the supported device list. Basically, any device can be managed by an NMS: even if an interface/protocol is not supported by the NMS, a gateway can be used, allowing interface/protocol adaptation.

What are the major OTT delivery formats?

Over-the-top video delivery can still rely on MP4 for Download & Play or Progressive Download, as well as RTMP FLV for streaming. However, to cope with the bandwidth variations of unmanaged networks, adaptive streaming formats emerged: Apple HTTP Live Streaming, Microsoft Smooth Streaming, Adobe HTTP Dynamic Streaming, and MPEG Dynamic Adaptive Streaming over HTTP. All of them usually include H.264 video and AAC audio payloads.
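
All of these adaptive formats rely on the same client-side idea: the content is encoded as a ladder of profiles and the player keeps picking the best one it can sustain. A sketch of that selection logic, with an illustrative ladder and safety margin:

    def pick_rendition(ladder_kbps: list, measured_kbps: float, safety: float = 0.8) -> int:
        """Pick the highest profile that fits within a safety margin of the measured throughput."""
        affordable = [b for b in sorted(ladder_kbps) if b <= measured_kbps * safety]
        return affordable[-1] if affordable else min(ladder_kbps)

    ladder = [400, 800, 1500, 3000, 5000]                # kbps, one entry per encoded profile
    print(pick_rendition(ladder, measured_kbps=2600))    # -> 1500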

What are the major OTT subtitles formats?

Although subtitle burn-in (open captions) is still common for OTT delivery, it requires stream duplication, which increases storage and delivery costs. To avoid this, it is possible to transcode broadcast subtitle formats like CEA-608/708 CC or DVB Teletext into broadband formats: DFXP/TTML for SS, HDS and DASH clients, and WebVTT for iOS clients (otherwise restricted to CEA-608). Converting image formats like DVB Subtitles would require OCR, which SMPTE-TT proposes to avoid by defining an image delivery mode.
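
The broadband side of such a conversion is simple once the cues have been extracted; the sketch below writes already-decoded cues as WebVTT, while the hard part (decoding CEA-608/708 or Teletext out of the transport stream) is not shown.

    def _timestamp(seconds: float) -> str:
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

    def to_webvtt(cues) -> str:
        """cues: iterable of (start_s, end_s, text) tuples extracted upstream."""
        lines = ["WEBVTT", ""]
        for start, end, text in cues:
            lines += [f"{_timestamp(start)} --> {_timestamp(end)}", text, ""]
        return "\n".join(lines)

    print(to_webvtt([(1.0, 3.5, "Hello, world")]))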

How to reduce CDN costs?

Reducing CDN costs can be achieved through several optimizations. First and foremost, higher video compression efficiency saves on cache and delivery volumes. It even increases end-user quality of experience by reducing the stalling rate while preserving video quality. Smart packaging is also strategic: the same profiles can, for example, be referenced in several DASH and SS manifests, and even repackaged on the fly for HLS delivery, which greatly reduces cache volume while improving the utilization rate.

Beyond video transcoding, which other performance factors should be considered?

Upload and download times to and from the cloud are definitely factors to take into consideration. If connectivity between the premises and the cloud datacenters is limited, content transmission times will dominate transcoding times and may become a bottleneck. Conversely, the availability of a fiber connection will shorten upload/download times.
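
A back-of-the-envelope calculation is often enough to see whether transfers will dominate; the file size and link speed below are placeholder values.

    def transfer_minutes(size_gb: float, link_mbps: float) -> float:
        """Time to move a file over a link, ignoring protocol overhead."""
        return size_gb * 8 * 1000 / link_mbps / 60

    # A 50 GB mezzanine file over a 100 Mbps uplink takes about 67 minutes,
    # which can easily dwarf a faster-than-real-time cloud transcode.
    print(round(transfer_minutes(50, 100)))   # -> 67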

Should we expect poorer, similar or better performance in cloud transcoding compared to on-premises transcoding?

In terms of pure video transcoding, similar performance should be expected. There may be differences (positive or negative) depending on the computing processors used in the cloud datacenters (Intel®, AMD®, or others) and whether or not the transcoding software is optimized to run on these technologies.

What do IaaS, PaaS and SaaS mean?

IaaS stands for Infrastructure as a Service. This is a service offering access to a pool of hardware resources. PaaS stands for Platform as a Service. This is a service offering access to a pool of operating system resources. SaaS stands for Software as a Service. This is a service offering access to end-user software.

What is the difference between Cloud and Virtualization for video processing?

Virtualized processing means that the processing runs in an environment that is not attached to a specific underlying hardware platform, but to a pool of resources. These resources can be present on premises or in a distant location.
Cloud processing refers to the situation where processing runs on underlying resources located in distant datacenters.

What is DVB-S2X?

DVB-S and DVB-S2 are the most commonly deployed standards used to broadcast content over satellite, either for contribution or distribution purposes.
DVB-S2X refers to the extensions of the DVB-S2 satellite transmission standard. These extensions provide recommendations and directions to improve satellite transmission efficiency by defining new tools such as smaller roll-off factors, larger constellations, smaller guard intervals between carrier frequencies and new MODCODs.

Do I need DVB-S2X for my application?

This highly depends on the nature of the application. For professional distribution over satellite, there is no doubt that the DVB-S2X standard offers more options to optimize media transport, allowing satellite bouquets to carry more channels. For professional contribution, the constraints can be a little different, as the available bandwidth is not constant and can change over time depending on demand. Moreover, since contribution is the first step of the broadcast chain, contribution uplinkers may not take the risk of applying the most aggressive modulation and will often prefer the most reliable schemes.

What are the advantages of DVB-S2X?

The purpose of these extensions is ultimately to improve uplink and downlink transmission efficiency by up to 25% compared to regular DVB-S2 transmissions. By adding, among other things, new higher-order constellations, the modulation scheme can reach better efficiency and get closer to the Shannon theoretical limit. More bits can be transported within the same bandwidth, leading either to quality gains or to density improvements.
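
For reference, the theoretical ceiling being approached is the Shannon channel capacity for a channel of bandwidth B and signal-to-noise ratio S/N:

    C = B \log_2\left(1 + \frac{S}{N}\right)

Higher-order constellations and smaller roll-off factors let the achieved spectral efficiency (bits per second per hertz) move closer to this bound for a given C/N.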

What are the constraints?

The more complex the modulation, the harder it is for the receiving side to demodulate it properly. A minimum C/N (carrier-to-noise) margin needs to be maintained in order to ensure correct reception. This means that high-complexity constellation schemes may not be suitable for every satellite setup. Stronger amplifiers and larger antennas help, and IRDs with high sensitivity and RF noise processing are designed to overcome these challenges.
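
In other words, the link budget must satisfy a simple margin condition, where the required C/N is the demodulation threshold of the chosen MODCOD (a simplified view that ignores implementation losses):

    \text{margin} = \left(\tfrac{C}{N}\right)_{\text{received}} - \left(\tfrac{C}{N}\right)_{\text{required}} > 0\ \text{dB}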

Are the regulations different from one country to another?

Yes, each country may apply different rules for Catch-up TV. These rules affect the way content is made available on Replay TV portals. For instance, in the US, Catch-up TV content must remain identical to the original broadcast content for a period of three days for commercial ratings purposes; this is known as the C3 window. These rules do not apply in European countries, which have chosen other rules for commercial ratings.

What is Catch-up TV?

Catch-up TV describes a TV service where TV shows are made available after the original broadcast. Content is usually available for a limited period (typically 7 days). It is also called Replay TV in some regions or countries.

What are the differences between Catch-up TV and Start-Over TV?

Start-Over TV describes a TV service that offers viewers the ability to rewind a program while it is still being broadcast. Catch-up TV offers the ability to watch a program whose broadcast is over.

What are the drivers of Catch-up TV service?

The #1 driver is fast availability: content must be made available on the Catch-up TV portal as soon as possible after it airs. Other drivers are content ubiquity on any screen, editing for monetization, and video quality for a better end-user experience.

Which video codec to choose for contribution?

Broadcast contribution applications require the highest possible video quality, as the content may be edited, post-processed and archived. In addition, this latter use case motivates the use of widely adopted and standardized technologies that guarantee long-term sustainability. For these reasons, AVC/H.264 using the High422 Profile is probably the best choice of video codec. It supports the 4:2:2 chroma sampling that was made mandatory in MPEG-2 contribution, and a 10-bit pixel bit depth that helps reduce banding artifacts.

Which video codec to use for delivery?

One of the major objectives of broadcast delivery operators is to maximize the number of video channels given a limited transmission capacity while reaching broadcast-grade video quality. The best possible tradeoff between compressed bitrate and video quality is currently achieved with AVC/H.264 using the High Profile. This video codec is widely supported by a large range of receivers, and complete broadcast delivery chains are available from many vendors. This makes it an affordable solution that can be deployed easily.

What is HEVC?

HEVC stands for High Efficiency Video Coding. It is the successor of AVC/H.264 and aims at halving bitrates while keeping the same video quality. It supports all video formats that are compressed using AVC/H.264, such as Standard Definition and High Definition (720p and 1080i). It also offers the ability to compress new Ultra High Definition formats, which makes it the best choice for upcoming UHDTV broadcast applications. HEVC was ratified as an international standard in 2013, and an ever-growing number of suppliers are working on supporting it.

What is AVC-I and what are the advantages of AVC-I against other similar codecs?

AVC-I is a compression scheme using only I-frames, as opposed to AVC/H.264 long GOP, which can use I, P and B frames. The tools used to encode AVC-I streams are chosen among those available in the AVC/H.264 High 10 Intra Profile or AVC/H.264 High422 Intra Profile. This means AVC-I streams are fully compliant with the AVC/H.264 standard, which guarantees a high level of interoperability, even with receivers that were not designed for Intra-only applications. AVC-I offers about 10% bitrate reduction when compared to JPEG 2000 without suffering from interoperability issues.

What bitrate to use?

The bitrate to use for broadcast delivery is highly dependent on the expected quality of experience. As a starting point, here are standard bitrate values used in the industry with H.264/AVC:

  • 1080i: 7 Mbps
  • 1080p24: 5 Mbps
  • 480i: 1.8 Mbps

We see a lot of variation among our customers, as some 1080i sports programs are transmitted using only 4 Mbps, whereas others use 12 Mbps.
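
As a rough illustration of how these figures drive channel density, here is a sketch assuming a typical DVB-S2 transponder payload of about 38 Mbps (an assumed value, not a fixed one) and a small reserve for audio, tables and overhead:

    def channels_per_mux(mux_mbps: float, per_channel_mbps: float, reserve: float = 0.05) -> int:
        """How many video services fit in a multiplex after reserving some capacity."""
        return int(mux_mbps * (1 - reserve) // per_channel_mbps)

    for rate in (4, 7, 12):                       # Mbps per 1080i service
        print(rate, "Mbps ->", channels_per_mux(38, rate), "channels")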

What codec for 3D?

Most 3D video delivery relies on a frame packing arrangement that is codec agnostic. The two views are usually packed either side-by-side for 1920×1080 interlaced or progressive source frames, or top-and-bottom for 1280×720 progressive frames. This technique avoids any change in the broadcast delivery chain and is widely supported by television sets that render the 3D content. But it is also possible to use codecs specifically designed for 3D compression, such as MVC (Multi-view Video Coding), which is an extension of H.264/AVC, or more complex schemes like the transmission of the two views over different delivery channels.
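
Side-by-side packing itself is trivial to express; the sketch below uses simple column decimation in place of the proper anamorphic filtering a real packer would apply.

    import numpy as np

    def pack_side_by_side(left_eye: np.ndarray, right_eye: np.ndarray) -> np.ndarray:
        """Pack two full-resolution views (H x W x 3) into a single frame of the same size."""
        half_l = left_eye[:, ::2]                  # keep every other column (crude squeeze)
        half_r = right_eye[:, ::2]
        return np.concatenate([half_l, half_r], axis=1)

    frame = pack_side_by_side(np.zeros((1080, 1920, 3), np.uint8),
                              np.ones((1080, 1920, 3), np.uint8))
    print(frame.shape)                             # -> (1080, 1920, 3)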

8-bit vs 10-bit?

The MPEG-2 codec could only compress pixels with 8-bit components. The advent of newer technologies such as AVC/H.264 and HEVC has opened the door to a finer quantization scale by allowing 10-bit pixel components. This improved quantization scale offers three distinct benefits:

  1. The codec can match the baseband pixel format used in professional production. This avoids any format conversion prior to encoding and after decoding, which is particularly helpful in contribution applications where the highest possible video quality is required.
  2. Using a higher bit depth enables more accurate computations in the encoder, which improves compression efficiency. The bitrate gain of 10-bit compression is about 5%.
  3. The human eye is very sensitive to certain colors, and too coarse a quantization introduces a visual defect called “banding”. Banding typically occurs in low-texture areas such as a deep blue sky or a sunset. Keeping 10-bit pixel components all along the compression and decompression chain helps avoid this artifact, as the quick comparison below illustrates.
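
The gain in granularity behind point 3 is easy to quantify, since each extra bit doubles the number of code values per component:

    2^{8} = 256 \quad \text{levels versus} \quad 2^{10} = 1024 \ \text{levels per component}

So a 10-bit chain has four times as many quantization steps available to describe smooth gradients, which is what keeps banding out of those low-texture areas.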