
Thursday, March 15, 2012

Mobile video optimization 2012: executive summary


As I publish my first report (description here), here is an exclusive look at its executive summary.


Executive Summary
Video is a global phenomenon in mobile networks. In only 3 years, it has exploded from a marginal position (less than 10%) to dominating mobile traffic in 2012 with over 50%.
Until now, mobile networks have been designed and deployed predominantly for transactional data. Messaging, email and browsing are fairly lightweight in terms of payload and require only speeds compatible with UMTS. Video brings a new element to the equation. Users rarely complained if a text or email arrived late; in fact, they rarely noticed. Video provides immediate feedback. Consumers demand quality and increasingly equate the network’s quality with the video quality.

With the wide implementation of HSPA(+) and the first LTE deployments, together with the availability of attractive new smartphones, tablets and ultrabooks, it has become clear that today’s networks and price structures are ill-prepared for this new era.
Handset and device vendors have gained much power in the balance, and many consumers now choose a device before a provider.

In parallel, suppliers of content and services are boldly pushing their consumer relationships to bypass traditional delivery media. These over-the-top (OTT) players extract more value from consumers than the access and network providers do. This trend is accelerating and threatens the very fabric of the business model for delivering mobile services.

This is the backdrop to the state of mobile video optimization in 2012. Mobile network operators find themselves in a situation where their core network is composed of many complex elements (GGSN, EPC, browsing gateways, proxies, DPI, PCRF…) that are extremely specialized but were designed with transactional data in mind. The price plans devised to ensure the network is fully utilized are backfiring, and many carriers are discontinuing all-you-can-eat data plans and subsidizing the adoption of limited, capped, metered models. Radio access is a scarce resource, with many operators battling their regulators to obtain more spectrum. The current model for adding capacity, based on purchasing more base stations and densifying the network, is finding its limits. Costs for network build-out are even expected to exceed data revenues in the coming years.
On the technical front, many operators are approaching the Shannon limit, the theoretical ceiling for spectral efficiency. Diminishing returns are the rule rather than the exception as RANs become denser over the same available spectrum, and noise and interference increase.
On the financial front, should an operator follow demand, it would have to double its mobile data capacity every year. The projected revenue increase for data services shows only a 20% CAGR through 2015. How can operators keep running their business profitably?
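The gap between these two growth rates is worth making concrete. A minimal back-of-the-envelope sketch, using illustrative figures rather than numbers from the report (capacity demand doubling yearly, revenue compounding at 20%, both indexed to 1.0 in 2012):

```python
# Capacity/revenue gap sketch. Both indexed to 1.0 in 2012; the growth
# rates are the only inputs taken from the text, the rest is arithmetic.
capacity = 1.0
revenue = 1.0
for year in range(2012, 2016):
    print(f"{year}: capacity demand x{capacity:.0f}, revenue x{revenue:.2f}")
    capacity *= 2.0   # demand doubles every year
    revenue *= 1.20   # 20% compound annual growth
```

By 2015, demand has grown 8x against roughly 1.7x revenue, meaning the cost per delivered byte would have to fall by a factor of about 4.6 just to keep margins flat.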
Operationally, doubling capacity every year seems impossible for most operators, who plan network roll-outs over three- to five-year horizons.
Solutions exist and are starting to emerge: upgrading to HSPA+ or LTE, deploying femtocells or picocells, drastically changing the pricing structure of video and social services, offloading part of the traffic to Wi-Fi, implementing adaptive bit rate, optimizing the radio link, caching, using CDNs, imagining new business models between content providers, device manufacturers and operators… All these solutions and others are examined in this report.
Video optimization has emerged as one of the technologies deployed to solve some of the issues highlighted above. Deployed in over 80 networks globally, it is a market segment that generated $102m in 2011 and is projected to generate over $260m in 2012. While it is not the only solution to this issue, {Core Analysis} believes that most network operators will have to deploy video optimization as a weapon in the arsenal to combat the video invasion of their networks. 2009 to 2011 saw the first commercial video optimization deployments, mostly as a defensive move to shore up embattled networks. In 2012, video optimization is a means to complement and implement monetization strategies, based on usage metering and control, quality-of-experience measurement and video class-of-service delivery.

Monday, June 27, 2011

BBTM part 2: Comverse & Continuous Computing

Comverse


Comverse is proposing a full-spectrum, holistic solution to video optimization, including PCEF, DPI, optimization, charging and some aspects of PCRF.
What caught my attention is their strong push for Gi-based optimization vs. Gn.

They argue that measuring congestion at the RAN level is inconsistent and inconclusive.
The big push is certainly also an attempt to ward off the network vendors (ALU, NSN, Ericsson, Huawei, ZTE) by arguing that there is an inherent conflict of interest when these vendors try to sell carriers both capacity and optimization at the same time. (My experience of working with all these companies is that 80% of the time, the right hand does not know what the left hand is doing, and that for a conflict of interest to exist, it would require far better organization and strategy than what I have observed.)



Comverse proposes that for effective cell-based congestion detection, a mechanism such as RADIUS interim accounting messages, triggered at the cell level rather than the RNC, would provide an effective way to relay RAN congestion indications to the core.


I agree with the premises but I am not sure about the conclusion. A lot of congestion at the RAN level is signalling congestion, and you could end up with an interesting snowball effect: RADIUS messages (notoriously inefficient; that is one of the primary reasons Diameter was invented) could greatly contribute to the very congestion they are trying to stave off.
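To get a feel for the scale of the snowball risk, here is a rough sizing sketch of the signalling load that cell-level interim accounting could generate. Every figure (cell count, users per cell, interim interval, message size) is an illustrative assumption, not a measurement:

```python
# Signalling-load sketch for cell-level RADIUS interim accounting.
# All figures below are illustrative assumptions.
cells = 10_000                 # cells in the network
active_users_per_cell = 50     # users with active data sessions per cell
interim_interval_s = 60        # one Interim-Update per user per minute
msg_size_bytes = 200           # accounting request with location attributes

msgs_per_sec = cells * active_users_per_cell / interim_interval_s
throughput_mbps = msgs_per_sec * msg_size_bytes * 8 / 1e6
print(f"{msgs_per_sec:,.0f} accounting msg/s, ~{throughput_mbps:.0f} Mb/s")
# RADIUS runs over UDP: under packet loss, clients retransmit, so this
# load grows exactly when the network is already congested.
```

Under these assumptions you are already in the thousands of messages per second before any retransmissions, which is the point: the monitoring traffic scales with exactly the conditions it is supposed to report.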


Now, Diameter repeaters at RAN level... that could help.

Continuous Computing
I was curious to hear from Continuous Computing after their recent acquisition by Radisys in May. They present themselves as an "arms dealer" in the optimization and traffic-offload war between vendors and offer some interesting perspectives.




Offload is a cost-effective way to manage surges and traffic increases, but it presents significant challenges for CALEA (lawful interception for law-enforcement agencies) as well as for charging and policy.
Effectively, when traffic is offloaded at the RAN level, you need paths to trombone it back to the core network for charging, PCRF and optimization functions if you want to get the most out of your investment, while satisfying both legal regulations and customer SLAs.
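The latency cost of that trombone is easy to sketch. The figures below are assumptions for illustration (a nearby breakout at 5 ms, a 15 ms one-way backhaul leg to the core, 3 ms of in-core processing), not measurements from any vendor:

```python
# Latency cost of tromboning offloaded traffic through the core before
# breakout. All figures are illustrative assumptions, not measurements.
local_breakout_ms = 5       # RAN -> nearby internet breakout, one way
ran_to_core_ms = 15         # extra backhaul leg to the core, one way
core_functions_ms = 3       # DPI + charging + optimization processing

# Round trip through the core (out and back) before local breakout.
tromboned_ms = local_breakout_ms + 2 * ran_to_core_ms + core_functions_ms
print(f"direct breakout: {local_breakout_ms} ms, "
      f"tromboned: {tromboned_ms} ms (+{tromboned_ms - local_breakout_ms} ms)")
```

Even with generous numbers, the detour multiplies the one-way path several times over, which matters for the latency-sensitive video traffic these deployments target.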


The rest of the presentation focused, of course, on Continuous Computing's solution, which collocates DPI and traffic offload (on the Iu interface, between the RNC and the SGSN) and interacts "seamlessly" with their video optimization, tromboning traffic back to the core before it goes out to the internet through Gi.


I don't think the "just another bump in the wire" theory actually works for video, where every millisecond of latency counts against the user experience.