Transforming & Virtualising Video Delivery


In this article, published May 24 in Digital Media World, Rémi Beaudouin, VP of Marketing at ATEME, talks about the challenge of transcoding for modern video delivery.

Virtualisation can give broadcasters the flexibility they need to manage changing demand. Digital television became a practical proposition around 20 years ago, but at that time the only way to encode and multiplex video involved dedicated hardware: standard processors simply did not have sufficient power. Since then, content and service providers have moved from digital broadcasting to video delivery across multiple platforms. Systems architects had started out building headends from hardware-based encoders, and the tendency has been to continue in this way. The result, for most networks, is a large variety of proprietary hardware appliances, each with its own rack requirements, power supplies and so on.

Breaking a Cycle

Even as standard processors have gained power and become useful components of dedicated encoding appliances, the temptation remains, as existing devices reach end of life, simply to replace one dedicated, proprietary box with a newer version. ATEME, a video compression specialist for bandwidth-sensitive broadcast, cable, DTH, IPTV and OTT applications, believes now is the time to break out of this cycle.

“The solution lies in virtualisation – implementing the functions of an encoder in software, which in turn can run, when necessary, on a virtual machine in a data centre,” said Rémi Beaudouin, VP of Marketing at ATEME. “This is entirely practical now, and it allows us to define systems ourselves, by functionality, rather than by what dedicated boxes do. In turn, that gives us much greater flexibility to define and vary our workflows and processes to reflect consumer demand and commercial opportunities.”

Network Function Virtualisation

Rémi calls this design philosophy network function virtualisation, or NFV. This method of linking software functions to achieve what a company needs to do at any moment now extends to storage as well, giving complete flexibility to scale operations up and down as required.

He first identified the commercial advantages. “Capital expenditure is reduced because we are simply building, or extending, a server farm based on standardised IT hardware and simple Ethernet connectivity,” he said. “We are not building racks of dedicated hardware with complex SDI cabling. The hardware is lower cost, because we can take advantage of the commoditisation of the IT industry. Also, building in high levels of redundancy – that is, continuity of operations – requires much less investment than duplicating dedicated hardware.

“The view in terms of ongoing costs also changes. For many applications, software will be licensed by use, so there is a direct link between operating costs and revenues. A virtualised environment will have a much smaller physical footprint than one using dedicated appliances, and consequently power and air-conditioning costs will be reduced. Support costs can also be shared across the whole data centre, not just the part of it that deals with dedicated broadcast and delivery systems.”

The result is a new elasticity for all operations, which can be accurately costed per service. Proposed new services can be precisely checked for commercial viability, and brought to market very quickly, perhaps gaining an edge over a competitor.

Software-Defined Networking

While software-defined networking, which separates functionality from control, is an established technique in other fields of IT, it is still a relatively new idea for broadcast. With traditional hardware, the way devices were interconnected defined the way processes were completed; SDN simply selects the required software packages to complete any task.

Rémi said, “This makes SDN, software-defined networking, and NFV, network function virtualisation, natural partners. NFV allows you to virtualise the functionality, and SDN creates the control and monitoring layer that issues instructions, prioritises the use of processor cores, and manages storage, while maintaining high availability and high reliability.

“One of the potential traps is that the management of virtual machines and their processes originally depended on proprietary software, which we would prefer to avoid. OpenStack, an open source virtualisation framework, started to emerge in 2010, allowing specialists to build architectures that meet the practical requirements of their installations. Configuration is done through a web-based dashboard, command-line tools or, most commonly, a RESTful API.”
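To make this concrete, here is a minimal sketch of driving an OpenStack installation through its API from Python, using the openstacksdk library rather than the web dashboard. The cloud profile, image, flavour and network names are hypothetical placeholders for whatever an operator has defined in their installation, not part of any ATEME product.

    import openstack

    # Credentials for the "transcode-farm" cloud profile come from clouds.yaml
    conn = openstack.connect(cloud="transcode-farm")

    # Look up the resources the new worker should use (names are illustrative)
    image = conn.compute.find_image("encoder-worker-image")
    flavor = conn.compute.find_flavor("m1.large")
    network = conn.network.find_network("headend-net")

    # Boot one more encoding worker; repeating this call scales the farm out
    server = conn.compute.create_server(
        name="encoder-worker-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once the instance is running

The same calls can be wrapped in a control layer's business rules, which is essentially the role an OpenStack-based management system plays in a virtualised headend.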

Lightweight and Fast

While this architecture is often used in the wider networking industry, Rémi pointed out that some companies, including ATEME, are implementing it in the video headend, both for OTT and for traditional broadcast delivery. Their architecture uses the ATEME Management System (AMS), itself an OpenStack system, as the control layer, with the virtualisation layer made up of as many Titan instances as required. Titan packages its processing in Docker containers, another open-source technology.

Docker containers have an edge over virtual machines for a few reasons. They are very lightweight – typically just a few megabytes – because they do not need to include a full operating system. That means a process can usually be spun up in less than one second, and the containers impose almost no CPU overhead. And because Docker is open-source software, it is free to use.
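As a simple illustration of that start-up speed, the following sketch uses the Docker SDK for Python to launch and time a container. The image name and command are hypothetical stand-ins for a containerised encoder, not ATEME's Titan packaging.

    import time
    import docker

    client = docker.from_env()  # talk to the local Docker daemon

    start = time.time()
    container = client.containers.run(
        "example/encoder-worker:latest",                 # hypothetical encoder image
        command=["encode", "--segment", "seg_0001.ts"],  # hypothetical job arguments
        detach=True,                                     # return as soon as the container starts
    )
    print(f"container started in {time.time() - start:.2f}s")

    container.wait()                          # block until the job inside finishes
    print(container.logs().decode()[-200:])   # tail of the worker's output
    container.remove()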

Another basic advantage of using OpenStack principles is that AMS and its functionality are immediately ready for the cloud. Third-party cloud service providers rely heavily on OpenStack, and AMS provides transparent interoperability for deploying functions, assigning IP addresses and setting up redundancy.

In Use: Transcoding a UHD Asset

To demonstrate the efficiency of this virtualised framework, Rémi gave a real-world example – the transcoding of a UHD asset. “If we are talking about a feature-length asset, this could take a day to transcode at high quality on a standalone device,” he said. “That means tying up expensive hardware for a very long time, which could impact other workflows. You may also not have a day to wait before you show the content.

“In a virtualised environment, ATEME’s management system parses the input file and starts as many encoder instances as it requires, or as many as the system’s business rules allow. Each instance is responsible for encoding one segment of the content. After the first pass, the system analyses the results and launches the instances for the second encoding pass at the required quality level and bitrate. Finally, all the encoded segments are aggregated into a single output asset.

“The speed increase will obviously depend on the number of parallel instances that can be launched. Practical experience shows that, for content segmented into one-minute chunks, you can get a hundredfold increase in encoding speed, bringing what was a 24-hour-plus timescale down to around 15 minutes. Furthermore, some or all of those instances could be in the cloud, giving your architecture the elasticity it needs for workflows with high processor demand.”
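The general pattern Rémi describes – split the source, encode the segments in parallel, then reassemble them – can be sketched in a few lines of Python. The version below uses ffmpeg as a generic stand-in encoder and a local thread pool in place of real distributed instances; the file names, segment length and encoder settings are assumptions for illustration, it collapses the two-pass workflow into a single pass, and it is not ATEME's implementation.

    import glob
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SOURCE = "uhd_master.mov"
    SEGMENT_SECONDS = 60      # one-minute chunks, as in the example above
    WORKERS = 8               # in practice, one per available encoder instance

    def split_source():
        """Cut the master file into fixed-length segments without re-encoding."""
        subprocess.run([
            "ffmpeg", "-i", SOURCE, "-c", "copy", "-f", "segment",
            "-segment_time", str(SEGMENT_SECONDS), "seg_%04d.mov",
        ], check=True)
        return sorted(glob.glob("seg_*.mov"))

    def encode_segment(path):
        """Encode one segment; each call could just as well run in its own container."""
        out = path.replace(".mov", "_enc.mp4")
        subprocess.run([
            "ffmpeg", "-i", path, "-c:v", "libx265", "-b:v", "15M", out,
        ], check=True)
        return out

    segments = split_source()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        encoded = list(pool.map(encode_segment, segments))

    # Aggregate the encoded segments back into a single output asset
    with open("concat.txt", "w") as f:
        f.writelines(f"file '{name}'\n" for name in encoded)
    subprocess.run([
        "ffmpeg", "-f", "concat", "-safe", "0", "-i", "concat.txt",
        "-c", "copy", "uhd_output.mp4",
    ], check=True)

Because each segment is an independent job, swapping the thread pool for containers or cloud instances changes only where encode_segment runs, which is exactly the elasticity the article describes.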

The large number of platforms and services people now work with, new content formats such as 360˚ virtual reality and 4K and 8K Ultra HD, and the need to protect archives all add to the volume and challenge of transcoding. Virtualisation techniques that are already available and affordable can give broadcasters and service providers the flexibility they need to manage these changes in demand.
