Friday, January 6, 2012

For or against Adaptive Bit Rate? part II: For ABR

As we saw in part I, ABR brings significant improvements to the way video can be delivered in lossy network conditions.
If we take the fragmented MP4 implementation as an example, the benefits to network operators and content providers are significant. The manifest, transmitted when the connection is established between the player and the server, describes the video file, its audio counterpart, the encoding used, and the different streams and bit rates available.
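To make this concrete, here is a hypothetical, simplified rendering in Python of the information a manifest carries. Real ABR protocols express this in XML or playlist form; the file names, resolutions and structure below are invented for illustration.

# Hypothetical, simplified view of what a manifest describes.
# The names and values are invented for this example; only the
# 300/500/700 kbps ladder comes from the discussion in this post.
manifest = {
    "video": "movie.ismv",   # fragmented MP4 video file
    "audio": "movie.isma",   # its audio counterpart
    "codec": "H.264 / AAC",  # the encoding used
    "streams": [             # the different versions available
        {"bitrate_kbps": 300, "resolution": "320x180"},
        {"bitrate_kbps": 500, "resolution": "480x270"},
        {"bitrate_kbps": 700, "resolution": "640x360"},
    ],
}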

Since the player receives all of this information when the connection is established, it has everything it needs to make an informed decision on the best bit rate to select for the delivery of the video. This is important because ABR is the only technology today that gives the device control over the selection of the version (and therefore the quality and cost) of the video to be delivered.
This is crucial, since there is no efficient means today of conveying congestion notifications from the Radio Access Network, through the backhaul and Core Network, to the content provider.
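A minimal sketch of that decision logic, in Python, assuming the device has already measured its available bandwidth. The function name, the 80% safety margin and the stream structure are illustrative, not taken from any real player.

def select_bitrate(streams, measured_kbps, safety_margin=0.8):
    """Pick the highest advertised bit rate that fits under the
    bandwidth the device has measured for itself."""
    budget = measured_kbps * safety_margin
    fitting = [s for s in streams if s["bitrate_kbps"] <= budget]
    if not fitting:
        # Nothing fits: fall back to the lowest available stream.
        return min(streams, key=lambda s: s["bitrate_kbps"])
    return max(fitting, key=lambda s: s["bitrate_kbps"])

# With the manifest above and ~600 kbps measured by the device, the
# 480 kbps budget selects the 300 kbps stream, keeping headroom for
# fluctuation.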

Video optimization technology sits in the Core Network and relies on its reading of the state of the TCP connection (packet loss, jitter, delay...) to deduce the health of the connection and the level of cell congestion. The problem is that a degraded TCP connection can have many causes beyond payload congestion. The video optimization server can end up making decisions to degrade or increase video quality based on insufficient observations or assumptions, and those decisions might end up contributing to congestion rather than alleviating it.
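To caricature the problem in code (the thresholds and names below are invented, not from any real appliance): the optimization server sees a handful of coarse TCP statistics and has to guess what they mean.

def core_network_guess(packet_loss_pct, jitter_ms, delay_ms):
    """Naive congestion heuristic of the kind a core-network appliance
    is reduced to. It cannot distinguish radio-cell congestion from the
    many other causes of TCP degradation (handover, poor coverage, a
    lossy last hop, a slow far-end server...)."""
    if packet_loss_pct > 2.0 or jitter_ms > 50 or delay_ms > 300:
        return "congested"  # possibly wrong: the loss may be radio noise
    return "healthy"        # possibly wrong too: deep buffers can mask congestion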

ABR, by giving the device the capability to decide which bit rate should be delivered, relies on the device's reading of the connection state rather than on an appliance's in the core network. Since the video is played on the device, that is where the measurement of the connection state is most accurate.

As illustrated below, as network conditions fluctuate throughout a connection, the device selects the bit rate that is most appropriate for the stream, jumping between 300, 500 and 700 kbps in this example to follow network conditions.
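Here is a sketch of that adaptation loop, again in Python. The smoothing factor and the per-fragment logic are assumptions for illustration, not a specific player's algorithm.

LADDER_KBPS = [300, 500, 700]  # the bit rates advertised in the manifest

def adapt(prev_estimate_kbps, observed_kbps, alpha=0.5):
    """Smooth the throughput estimate after each fragment download,
    then snap to the highest rung of the ladder it can sustain."""
    estimate = alpha * observed_kbps + (1 - alpha) * prev_estimate_kbps
    rung = max((r for r in LADDER_KBPS if r <= estimate),
               default=LADDER_KBPS[0])
    return estimate, rung

# Example run: throughput sags, then recovers, across five fragments.
estimate = 700.0
for observed in [720, 420, 380, 700, 900]:
    estimate, bitrate = adapt(estimate, observed)
    print(f"estimate ~{estimate:.0f} kbps -> request {bitrate} kbps fragments")

The device walks down the ladder as throughput sags and climbs back up as it recovers, which is exactly the behaviour described above.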

This is an efficient way to give the user optimal quality while network conditions fluctuate, and to reduce the pressure on congested cells when the connection degrades.

So, if ABR accounts for only 4 to 6% of traffic, why isn't it more widely used, and why are network operators implementing video optimization solutions in the core network? Will ABR become the standard for delivering video in lossy networks? These questions and more will be answered in the next post.
