As video traffic grows in mobile and fixed networks alike, innovative companies are looking at optimizing traffic closer to the user. These companies perform lossless and lossy optimization at the edge of the network, be it directly in the CDN's PoP or at the RNC in mobile radio networks. Today we will look at cellular RAN-based optimization; a following post will cover edge optimization in fixed networks.
As I have indicated in previous posts (here), I believe that implementing lossy video optimization in the core network or the backhaul is very inefficient without a good grasp of what is happening on the user's device, or at least in the radio network. Core-network-based mobile video optimization vendors infer the state of network congestion by reading and extrapolating the state of the TCP connection. Looking at parameters such as round-trip time, packet loss ratio and TCP window size, they deduce whether the connection is improving or worsening and increase or decrease the rate of optimization accordingly. This technique is called Dynamic Bit Rate Adaptation and is one of the most advanced offered by some of the vendors out there. Others read the state of the connection only at establishment and set the encoding rate based on that single measurement.
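To make the mechanism concrete, here is a minimal sketch of the kind of logic a Dynamic Bit Rate Adaptation engine applies. The function name, thresholds and step sizes are all illustrative assumptions, not any vendor's actual implementation — the point is simply that the decision is driven entirely by TCP-level observations:

```python
# Hypothetical sketch of core-network Dynamic Bit Rate Adaptation:
# infer congestion purely from TCP-level metrics and scale the target
# encoding bitrate accordingly. Names and thresholds are illustrative.

def adapt_bitrate(current_kbps, rtt_ms, baseline_rtt_ms, loss_ratio,
                  min_kbps=250, max_kbps=2000):
    """Return a new target encoding bitrate from TCP observations only."""
    if loss_ratio > 0.02 or rtt_ms > 2 * baseline_rtt_ms:
        # Connection looks congested: step the encoding rate down.
        return max(min_kbps, int(current_kbps * 0.8))
    if loss_ratio < 0.005 and rtt_ms < 1.2 * baseline_rtt_ms:
        # Connection looks healthy: probe upward cautiously.
        return min(max_kbps, int(current_kbps * 1.1))
    return current_kbps  # Otherwise hold steady.
```

Note that nothing in this function can tell *why* the RTT doubled or packets were lost — which is exactly the weakness discussed next.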
The problem with these techniques is that they deal with the symptoms of congestion, not its causes. Vendors end up increasing or reducing the encoded bit rate of the video without understanding what the user is actually experiencing in the field. As you well know, a range of issues can affect the state of a TCP connection, from the device's CPU and antenna reception to the sector's signalling occupancy or whether the user is moving, none of which is actually payload congestion. Core vendors have no way to diagnose these situations, so they treat any signal degradation as payload congestion, in some cases creating race conditions and snowball effects where the optimization engine actually contributes to degrading the user experience rather than improving it.
RAN-based optimization vendors are deployed in the RAN, at the RNC or even the base station level, and perform real-time analysis of the traffic. By looking at both payload and signalling per sector, cell, aggregation site and RNC, they can build a good understanding of what the user is experiencing in real time and determine whether a degradation in a TCP connection is the result of payload congestion, signalling issues or a cell handover, for instance. This precious data is then analysed, presented and made available for corrective action. Some vendors provide the congestion indications through a Diameter integration, with the information travelling from the RAN to the core to allow resolution and optimization by the PCRF and the video optimization engine. Some vendors even provide lossless and lossy techniques at the RAN level to complement the core capabilities, ranging from payload and DNS deep caching to TCP tuning, pacing and content shaping.
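The key difference from the core-only approach is that a RAN-side probe can label the *cause* of a degradation before anything acts on it. The sketch below illustrates that classification step; the field names, thresholds and category labels are my own assumptions for illustration, not any product's interface:

```python
# Illustrative sketch of how a RAN-based probe might classify the cause
# of a TCP degradation before deciding whether to trigger optimization.
# All field names, thresholds and labels are hypothetical.

from dataclasses import dataclass

@dataclass
class CellState:
    signalling_load: float        # fraction of signalling capacity in use
    payload_load: float           # fraction of data channel capacity in use
    handover_in_progress: bool    # user is being handed to another cell

def classify_degradation(cell: CellState) -> str:
    """Label the likely cause so only payload congestion triggers lossy optimization."""
    if cell.handover_in_progress:
        return "handover"            # transient; do not re-encode
    if cell.signalling_load > 0.9:
        return "signalling"          # re-encoding payload would not help
    if cell.payload_load > 0.85:
        return "payload_congestion"  # legitimate candidate for lossy optimization
    return "device_or_radio"         # e.g. poor reception; leave payload alone
```

Only the `payload_congestion` verdict would then be forwarded, for example over the Diameter integration mentioned above, to the PCRF and the optimization engine.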
This is, in my mind, a great improvement to mobile networks: it breaks the barrier between RAN and core and enables holistic optimization along the delivery chain, where it matters most, with the right information to understand the network's condition.
The next step is giving the device the capability to report to the network its own reading of the network condition, together with its state and the video experience, to close the feedback loop. The vendors that solve the equation device state + RAN condition + policy management + video optimization = better user experience will strike gold and enable operators to truly monetize and improve mobile video delivery.