Simultaneous Multiformat Encoding
When Real-Time Encoding Isn’t Possible
Hybrid hardware-software solutions do not have predetermined limits on the number of concurrent encodes. Yet, given the almost limitless combinations of codec, container format, resolution, bit-rate, and other advanced encoding parameters—as well as the unlimited number of deliverables a user may wish to encode—it’s possible to exceed the real-time capabilities of even the most robust hardware or software encoder.
For CPU-based systems, processing power and memory management impose practical limitations. Regardless of what hardware is applied to the encoding, more recent codecs and high-resolution source material will also tax the limits of a real-time encoder.
Advanced H.264 HD encoding, for example, is more computationally intensive than basic MPEG-2 HD encoding, so more simultaneous outputs can be created alongside a full-resolution MPEG-2 encode than alongside an equivalent full-resolution H.264 encode. When the desired combination of output settings and number of simultaneous encodes can’t be achieved in real time, robust workflow features of the encoder can still reduce the operational complexity and manual effort of multiplatform encoding. Some solutions can capture to an uncompressed or lossless intermediate file format using minimal CPU processing power; that intermediate file is then automatically transcoded to the desired output formats.
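The intermediate-file workflow amounts to fanning one mezzanine file out into one encoder invocation per output profile. Here is a minimal sketch, assuming ffmpeg as the transcoder; the profile names, bitrates, and resolutions are illustrative, not recommendations:

```python
# Fan one lossless mezzanine file out to several delivery formats by
# building one encoder command per output profile. Profile values are
# illustrative only.
PROFILES = [
    {"name": "720p_h264",  "codec": "libx264", "bitrate": "2500k", "size": "1280x720"},
    {"name": "480p_h264",  "codec": "libx264", "bitrate": "1200k", "size": "854x480"},
    {"name": "1080p_h264", "codec": "libx264", "bitrate": "5000k", "size": "1920x1080"},
]

def build_commands(mezzanine):
    """Build one ffmpeg invocation per output profile. Commands are
    returned rather than executed, so the sketch stays self-contained."""
    commands = []
    for p in PROFILES:
        commands.append([
            "ffmpeg", "-i", mezzanine,
            "-c:v", p["codec"],
            "-b:v", p["bitrate"],
            "-s", p["size"],
            f'{p["name"]}.mp4',
        ])
    return commands

cmds = build_commands("mezzanine.mov")
```

In a real system, a watch folder or job queue would trigger this fan-out automatically as each mezzanine file lands on storage.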
Another way to increase throughput for multiple simultaneous transcodes is to increase the overall transcoding system’s capability. When a single-core or multicore processor on a single transcoding machine is not sufficient, scalable enterprise-class models allow transcoding tasks to be distributed across multiple systems.
In these solutions, the mechanisms used to distribute jobs between systems are important, especially when disparate transcoding systems are combined into a master transcoding solution. A simple "next job to next available node" paradigm can be adequate when all of the nodes are based on identical CPU processing technologies. Newer CPU-based systems, however, often feature improved multicore processor capabilities, special CPU functions optimized for encoding, and faster speeds, so the newest systems should be reserved for the most complex transcoding output scenarios.
The ideal transcoding solution, then, tracks the performance of each node for each type of transcode, "learning" which systems are best for each type of task and distributing jobs accordingly for maximum overall throughput. The greater the number of systems added later and the greater their performance potential over existing systems, the greater the potential benefit that such "intelligent" scalability can deliver.
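Such "learning" distribution can be sketched as a scheduler that records measured throughput per node and task type, then routes each new job to the historically fastest node for that task. This is a minimal sketch; the node names, task labels, and throughput figures are hypothetical:

```python
from collections import defaultdict

class LearningScheduler:
    """Track measured throughput per (node, task_type) and route each
    new job to the historically fastest node for that task type."""

    def __init__(self, nodes):
        self.nodes = nodes
        # (node, task_type) -> list of observed throughputs (frames/sec)
        self.history = defaultdict(list)

    def record(self, node, task_type, throughput):
        """Log one completed job's measured throughput."""
        self.history[(node, task_type)].append(throughput)

    def pick_node(self, task_type):
        """Return the node with the best average throughput for this
        task type; unmeasured nodes score infinity so they get tried
        first and the scheduler keeps learning."""
        def avg(node):
            obs = self.history[(node, task_type)]
            return sum(obs) / len(obs) if obs else float("inf")
        return max(self.nodes, key=avg)

sched = LearningScheduler(["node-a", "node-b"])
sched.record("node-a", "h264_1080p", 45.0)  # older node: 45 fps observed
sched.record("node-b", "h264_1080p", 90.0)  # newer node: twice as fast
```

After these two observations, `pick_node("h264_1080p")` routes the next 1080p H.264 job to the faster node, while an unseen task type is still exploratory.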
Tricks and Tips
To reach the broadest audience possible across varying connection speeds and operating systems (especially if syndicating your content across multiple platforms), consider offering your web-based content in multiple resolutions and bitrates, whether as separate offerings or packaged together for adaptive bitrate delivery. While this requires additional transcoding resources, the ability to send a version of your content to a mobile phone or limited web connection at a slightly lower bitrate or resolution may be the only practical solution to an otherwise unacceptable viewing experience.
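The core of such a multi-bitrate offering is a rendition "ladder" plus a rule for picking the highest rendition a given connection can sustain. The sketch below uses illustrative ladder values; real adaptive bitrate players make this choice continuously on the client side:

```python
# Illustrative bitrate ladder, ordered from highest to lowest rendition.
LADDER = [  # (label, video bitrate in kbps)
    ("1080p", 5000),
    ("720p", 2500),
    ("480p", 1200),
    ("240p", 400),
]

def pick_rendition(measured_kbps, headroom=0.8):
    """Return the highest-bitrate rendition that fits within the
    measured connection speed, leaving some headroom for overhead
    and bandwidth fluctuation."""
    budget = measured_kbps * headroom
    for label, kbps in LADDER:  # ladder is ordered high to low
        if kbps <= budget:
            return label
    return LADDER[-1][0]  # fall back to the lowest rendition
```

A 3.5Mbps connection, for example, lands on the 720p rendition rather than stalling on 1080p, which is exactly the trade-off described above: a slightly lower bitrate instead of an unacceptable viewing experience.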
For live content that is being streamed—or recorded for later streaming—consider creating a full-resolution, high-bitrate archive copy of the content in addition to the lower-bitrate streaming files. The archive copy, sometimes referred to as a "mezzanine" file, allows for post-event transcoding with more complex encoding parameters than may have been achievable in real time during the live stream, and it takes advantage of the ability to look both forward and back in the content during compression. Within a short time after the live stream, a replacement copy can be added to the CDN that often uses less bandwidth and looks even better than the live stream did. This sounds counterintuitive, especially for broadcasters who have historically used the live broadcast as the benchmark, but such non-real-time processes can yield a higher-quality transcoded file for on-demand viewing.
In addition, the archival copy of the live stream future-proofs transcoding of content for additional delivery platforms that may be used if content becomes popular, as well as for emerging codecs and formats.
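One reason the post-event re-encode can beat the live encode is multi-pass encoding: a first pass gathers statistics over the entire file before a second pass allocates bits, which a live encoder with no ability to look ahead cannot do. Building such a command pair can be sketched as follows, assuming ffmpeg as the transcoder; the filenames and bitrate are illustrative:

```python
# Sketch: two-pass re-encode of the mezzanine archive for on-demand
# delivery. Pass 1 analyzes the whole file and writes statistics;
# pass 2 uses them to allocate bits where they matter most.
def two_pass_commands(mezzanine, bitrate, output):
    """Return the ffmpeg command pair for a two-pass H.264 encode.
    Commands are returned, not executed, to keep the sketch
    self-contained."""
    common = ["ffmpeg", "-i", mezzanine, "-c:v", "libx264", "-b:v", bitrate]
    first = common + ["-pass", "1", "-f", "null", "/dev/null"]  # analysis only
    second = common + ["-pass", "2", output]                    # real encode
    return [first, second]

first, second = two_pass_commands("event_mezzanine.mov", "2000k", "event_vod.mp4")
```

The same mezzanine file can later be fed through this step again with newer codecs or additional output profiles as delivery needs evolve.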
Conclusion
This article has just scratched the surface of multiformat, multiplatform encoding, a key step in the content value chain for almost every professional content workflow.
Whether encoding from live or tape-based sources, encoding systems vary greatly in their ability to create multiple output formats. The most versatile systems concurrently create multiple output files in real time, including groupings of files with varying formats, codecs, bitrates and resolutions. The ability to create these simultaneous outputs not only boosts efficiency when targeting multiple delivery platforms, but it also enables higher-quality future usage beyond the immediate application.