Assuring Video Quality
Over the next two years, as processing power becomes sufficient to produce decent or even high-quality real-time encodes, the differentiator in the professional encoding and transcoding markets will shift from performance to quality.
Or at least that’s one theory. After all, the reasoning goes, if everyone’s encoder is capable of generating content in real time, or even faster than real time, what else but quality will differentiate solutions from one another?
While we’re not yet at a time when all systems perform equally, we are at a transition point at which it makes sense to begin to address quality as a defining factor.
Defining Quality Approaches
In the quality debate, there seem to be two basic approaches: a pragmatic approach and a technical one.
The pragmatic approach often comes down to a decision by content owners and their “golden eye” compressionists, balancing “good enough” against too-lengthy encoding times. Still, that balancing act doesn’t keep premium content owners from wanting both speed and quality: The Holy Grail is a fast, robust system that encodes content in real time (or faster than real time for file-based transcoding) at the highest possible quality.
In other words, a suspension of the laws of physics, as one participant in Transitions’ inaugural 2010 “Best Workflows” study succinctly put it after viewing preliminary results from this set of comparative tests in mid- to late 2010, in which we attempted to balance performance, quality, and workflow differentiators across a few of the world’s best encoding and transcoding systems.
The technical approach, on the other hand, seeks to break quality down into numerical values, using a variety of tests that correlate with human perception to varying degrees. While this quantitative approach does not yet address some of the newer delivery solutions, such as adaptive bitrate (ABR) and peer-to-peer (P2P) delivery, it offers a starting point for baselining content quality.
What exactly are some of the technical elements of this quality debate, and how do we approach quality measurements beyond just what looks good to an individual?
While it would be nice to just list them off in four or five bullet points, it turns out that even the elements themselves are up for debate, depending on whether the discussion centers on the quality of the video luminance, the image, or the delivery network.
“For the analysis of decoded video,” wrote Stefan Winkler and Praveen Mohandas in a paper published in IEEE Transactions on Broadcasting, “we can distinguish data metrics, which measure the fidelity of the signal without considering its content, and picture metrics, which treat the video data as the visual information that it contains. For compressed video delivery over packet networks, there are also packet- or bitstream-based metrics, which look at the packet header information and the encoded bitstream directly without fully decoding the video. Furthermore, metrics can be classified into full-reference, no-reference, and reduced-reference metrics based on the amount of reference information they require.”
Got that? In other words, depending on which portion of the network you are measuring, there are a number of quality measurements at hand.
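To make that taxonomy a bit more concrete, consider the oldest and simplest of the full-reference data metrics in Winkler and Mohandas’ framing: peak signal-to-noise ratio (PSNR), which scores a decoded frame against its source frame on a decibel scale. The sketch below is purely illustrative (the NumPy-based function and the commented-out frame loader are our own shorthand, not part of any vendor’s toolchain), but it shows how little a data metric knows about picture content: it compares pixel values and nothing more.

```python
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference frame and a decoded frame.

    Both frames are same-shaped arrays of pixel samples (e.g., 8-bit luma).
    PSNR = 10 * log10(MAX^2 / MSE), reported in decibels; higher is better.
    """
    # Mean squared error over all samples, computed in floating point
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10.0 * np.log10((max_value ** 2) / mse)

# Hypothetical usage: compare an original frame against its re-encoded version.
# original = load_frame("reference.yuv")   # load_frame is an illustrative helper
# encoded  = load_frame("decoded.yuv")
# print(f"PSNR: {psnr(original, encoded):.2f} dB")
```

That blindness to content is precisely why picture metrics and perceptual models exist: two encodes can land on the same PSNR score yet look noticeably different to a human viewer.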