Wednesday, June 29, 2011

BBTM Part 3: Openet & Cricket

Openet
 
Michael Manzo's presentation focused on the top techniques and trends to watch in mobile broadband.
#1: Smarter data service tiers
Add a QoS dimension to the usual speed and volume tiers: let users pay for better access and quality or, conversely, zero-rate traffic at certain times of day when the network is less congested. It is a good idea, but the practical implementation seems complicated. A self-care interface for data usage changes triggers the PCRF and charging gateway, which in turn trigger the DPI and PCEF...
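To make the idea concrete, here is a minimal sketch of what such a tier definition with off-peak zero-rating could look like. This is purely illustrative: the tier names, speeds, quotas and time windows are hypothetical and do not represent any Openet product or API.

```python
# Illustrative sketch only: a toy model of tiered plans with time-of-day
# zero-rating. All names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import time

@dataclass
class Tier:
    name: str
    speed_kbps: int          # access speed the PCRF/PCEF would enforce
    monthly_quota_mb: int    # volume cap before throttling or overage
    offpeak_start: time      # window in which traffic is zero-rated
    offpeak_end: time

TIERS = [
    Tier("basic",   1_000,   500, time(1, 0), time(6, 0)),
    Tier("premium", 4_000, 2_000, time(1, 0), time(6, 0)),
]

def is_zero_rated(tier: Tier, now: time) -> bool:
    """True if usage at 'now' should not count against the quota."""
    return tier.offpeak_start <= now < tier.offpeak_end

def chargeable_mb(tier: Tier, usage_mb: float, now: time) -> float:
    """Volume the charging gateway would actually rate for this session."""
    return 0.0 if is_zero_rated(tier, now) else usage_mb
```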


#2 Service passes
Prepaid vouchers for data usage: $15 for 250 MB...
This is already in effect at many operators and has proven effective in terms of data ARPU growth.
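For illustration only, a service pass is essentially a prepaid quota that is depleted as usage records arrive. The structure below is a hypothetical sketch, not any operator's or vendor's implementation.

```python
# Toy model of a prepaid data pass ($15 for 250 MB) being depleted.
class ServicePass:
    def __init__(self, price_usd: float, volume_mb: float):
        self.price_usd = price_usd
        self.remaining_mb = volume_mb

    def consume(self, usage_mb: float) -> float:
        """Deduct usage from the pass; return any overflow not covered."""
        covered = min(self.remaining_mb, usage_mb)
        self.remaining_mb -= covered
        return usage_mb - covered

pass_250 = ServicePass(price_usd=15.0, volume_mb=250.0)
overflow = pass_250.consume(80.0)   # 170 MB left on the pass, no overflow
```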


#3 OTT content partnership
This is the one that I think has the most mileage. As posted previously, it is in my opinion inevitable that carriers will have to partner with Hulu, YouTube, iCloud, Netflix, BitTorrent... I cannot imagine these OTT vendors sitting idle while carriers try to monetize this traffic on their networks and modify the intended user experience in the process. Again, before revenue share can happen, QoE sharing is necessary.
Carriers and OTT properties need to agree on what content and services should get what type of QoS in which circumstances, in order to apportion a fair revenue share.


#4 TV everywhere
Premium content monetization for tablets. VOD everywhere sounds like a sexy business model, now that mobile broadband allows content to be shared across TVs, set-top boxes, Blu-ray players, video game consoles, tablets, laptops and smartphones. As previously discussed, it is a business model that still has some way to go before it is palatable to the mass market.

#5 Fixed-mobile convergence parental control

Voice, text and browsing parental control. While I understand the value proposition and, in some cases, the regulatory constraints, this is the trend I am the most skeptical about. In my experience, trying to enforce parental control on content usage is a bit like content-based charging: an interesting concept that is too complex to implement with today's technology. I don't think it is technically or logistically viable to maintain whitelists and blacklists of URLs or domains to regulate your child's access to the internet. Web properties change too fast, and your average teenager knows enough about anonymizing and redirecting browsers and apps to circumvent any network-based attempt to regulate that usage.



#6 Smarter cloud services
Expand policy, charging and optimization to the cloud. This will be the subject of a future post.



Cricket Leap
Leap is an interesting carrier, with a very innovative, disruptive positioning that has allowed it to garner a very different customer base from the rest of the US carriers:
Leap's customer base is mostly young (55% under 35 years old), cost conscious (median yearly income under $50k), largely of ethnic descent (60%), and uses its Cricket phone as its primary phone (95%!).

The key to attracting this demographic has historically been to offer prepaid contracts with unlimited usage. The result is quite interesting: 5.8 million subscribers generating $2.8B in revenue.

Their tiered offering for mobile broadband, based on speed tiers and throttling, has been a staggering success. Today, mobile broadband revenue covers the entire CAPEX and OPEX of the network, which means voice and text revenues are pure margin...

Going forward, Leap will differentiate further, using QoS and QoE levers to create an even more segmented pricing strategy for mobile broadband users.


On the question of traffic optimization, it is interesting to note that Leap commented "Anytime we implemented optimization techniques in our network, we did not see any negative impact on customer traffic".


Monday, June 27, 2011

BBTM part 2: Comverse & Continuous Computing

Comverse


Comverse is proposing a full-spectrum, holistic solution for video optimization, including PCEF, DPI, optimization, charging and some aspects of PCRF.
What caught my attention is their strong push for Gi-based optimization over Gn.

They argue that measuring congestion at the RAN level is inconsistent and inconclusive.
The big push is certainly also an attempt to ward off the network vendors (ALU, NSN, Ericsson, Huawei, ZTE) by arguing that there is an inherent conflict of interest when these vendors try to sell carriers both capacity and optimization at the same time. (My experience working with all these companies is that 80% of the time the right hand does not know what the left hand is doing, and that an actual conflict of interest would require far better organization and strategy than what I have observed.)



Comverse proposes that, for effective cell-based congestion detection, a mechanism such as RADIUS interim messages triggered at the cell level, rather than at the RNC, would provide an effective way to relay RAN congestion indications to the core.
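To make the idea more tangible, here is a minimal sketch of how per-cell load could be aggregated from accounting updates. It assumes the interim accounting records (e.g. RADIUS Interim-Update messages) have already been parsed into dicts carrying a cell identifier and byte counters; the field names and the congestion threshold are hypothetical, not Comverse's mechanism.

```python
# Sketch under assumptions: accounting updates arrive as dicts with a
# "cell_id" and "output_octets" field (hypothetical names).
from collections import defaultdict

CONGESTION_THRESHOLD_BPS = 8_000_000   # illustrative per-cell ceiling

cell_bytes = defaultdict(int)          # bytes observed per cell this window

def on_interim_update(record: dict) -> None:
    """Accumulate downlink volume against the reporting cell."""
    cell_bytes[record["cell_id"]] += record["output_octets"]

def congested_cells(window_seconds: int) -> list:
    """Cells whose average throughput over the window exceeds the ceiling."""
    return [cell for cell, total in cell_bytes.items()
            if (total * 8) / window_seconds > CONGESTION_THRESHOLD_BPS]
```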


I agree with the premise, but I am not sure about the conclusion. A lot of the congestion at the RAN level is signalling, and you could end up with interesting snowball effects where RADIUS messages (notoriously inefficient, which is one of the primary reasons Diameter was invented) greatly contribute to the very congestion they are trying to stave off.


Now, Diameter repeaters at the RAN level... that could help.

Continuous Computing
I was curious to hear from CC after their recent acquisition by Radisys in May. They present themselves as an "arms dealer" in the optimization and traffic offload war between vendors, and they offer some interesting perspectives.




Offload is a cost-effective way to manage surges and traffic increases, but it presents significant challenges around CALEA (lawful interception for law enforcement agencies), charging and policy.
Effectively, when traffic is offloaded at the RAN level, you need paths to trombone it back to the core network for charging, PCRF and optimization functions if you want to get the most out of your investment while satisfying both legal regulations and customer SLAs.
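As a rough illustration of that trade-off, the sketch below shows a toy breakout decision: flows that need lawful intercept, online charging or optimization are hairpinned back through the core, while the rest break out locally. The Flow fields and the decision logic are hypothetical, not a Radisys/Continuous Computing interface.

```python
# Illustrative sketch of a RAN-offload breakout decision (hypothetical).
from dataclasses import dataclass

@dataclass
class Flow:
    subscriber_id: str
    intercept_target: bool   # subscriber under a lawful-intercept warrant
    online_charged: bool     # prepaid / quota-managed session
    video: bool              # candidate for video optimization

def route(flow: Flow) -> str:
    """Decide whether a flow must be tromboned through the core."""
    if flow.intercept_target or flow.online_charged or flow.video:
        return "core"            # via SGSN/GGSN, PCEF and optimizer
    return "local_breakout"      # straight to the internet at the RAN edge
```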


The rest of the presentation focused, of course, on Continuous Computing's solution, which collocates DPI and traffic offload (on the Iu interface, between the RNC and the SGSN) and interacts "seamlessly" with their video optimization, tromboning traffic back to the core before it goes out to the internet through Gi.


I don't think that the "just another bump in the wire" theory actually works for video, where every millisecond of latency counts against the user experience.






Friday, June 24, 2011

BBTM: The value circle and content based charging

Broadband Traffic Management North America took place in Boston on June 21-22.
Here are my notes and highlights from the conference.




This is the first year that Informa has held this event in North America. After the great success of the UK edition last November, they decided to create regional offshoots of the show in the Middle East, North America and Asia, with the global edition still planned in the UK this November.


The show, like the UK edition, featured most of the subjects that are relevant to mobile broadband:


  • Data offload,
  • Policy management and charging,
  • Video optimization,
  • Femtocells,
  • Traffic optimization...


The attendees and presenters were mostly vendors, with only a few carriers represented (AT&T, Verizon, Cricket, Telecom Italia). No doubt the ratio will change in the future as the show takes on a more visible role in North America.


Here are a few of the highlights from my perspective:


Jeff Eisenach - Navigant Economics
Jeff is a veteran of US regulatory forums. He gave an interesting presentation about the shift from value chain to value circle in wireless, forcing carriers to reconsider old notions such as "owning the customer".


I quite like the concept. Customers now actually have relationships with content owners, aggregators, phone manufacturers and carriers... No one owns the customer; customers are shared, and in my mind this will force more concessions from carriers in the future, beyond the revenue shares we have already seen between the likes of AT&T and Apple for the iPhone.


My panel:
How Can Carriers Move Away From Unlimited Data Plans While Keeping Customers Happy?
with:

  • Mike Coward, Co-Founder & CTO, Continuous Computing
  • Fred Kemmerer, CTO, Genband
  • Chris Hoover, VP Product Marketing and Management, Openet
There is still a lot of talk about content-based charging from PCRF vendors. It seems to be the projected cure for all that ails mobile networks. Too much OTT traffic on your network? Content-based charging is the solution. Too much P2P traffic? Content-based charging is the solution...


When I asked how you reconcile content, transport and application, offering the following example, it became clear that there is still a lot of work for charging and policy vendors to do before true content-based charging is possible.


On my iPhone, I can watch video from the browser or from an app. Some apps, before serving you content, ask for your permission to push notifications or to use your location. If an app fetches content and also invokes GPS or another network-based service, how is the network operator supposed to understand that this video is YouTube, served from the app rather than the browser, which is invoking GPS for location targeting, and charge for the overall service?
Today, the user inevitably gets charged for the data transport of the video and for the GPS call. Even if an operator offers all-you-can-eat YouTube, you still get charged for the GPS call, right? Because the network is not intelligent enough to make the contextual distinction between me using my map app and invoking GPS, and a third-party app invoking GPS.
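A minimal sketch of the problem: flow-level rating with no application context. The classifier below zero-rates anything it recognizes as YouTube video, but the location lookup made by the same app arrives as a separate flow to a different destination, so nothing ties it back to the zero-rated session. Host patterns, rating values and the location-API name are hypothetical examples, not any vendor's rules.

```python
# Illustrative only: per-flow classification and rating with no app context.
def classify(flow: dict) -> str:
    """Toy DPI: classify a flow by server name alone."""
    host = flow["sni_or_host"]
    if "youtube" in host or "googlevideo" in host:
        return "youtube_video"
    if "location-api" in host:          # hypothetical location service
        return "location_service"
    return "generic_data"

def rate(flow: dict) -> float:
    """Zero-rate YouTube video; bill everything else per MB."""
    if classify(flow) == "youtube_video":
        return 0.0
    return flow["megabytes"] * 0.05     # illustrative $/MB

# The GPS/location call made *by the YouTube app* is a separate flow with
# its own destination, so it is rated as ordinary data: nothing in this
# model links it to the zero-rated video session.
```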

As more discussions about content-based charging ensued, I argued that an operator attempting to derive revenue from OTT services will inevitably need to get closer to content and app providers.

Content providers will demand a share of the revenues generated from their services. Otherwise, you will see content providers encrypting their traffic in an attempt to keep control of the user experience and to deny the operator the ability to throttle or mangle content delivery.


Next-generation DPI, traffic management, PCRF and proxies are necessary.
I believe that operators will have to extend quality of service (QoS) and quality of experience (QoE) control to the content provider, beyond the operators' walled garden, if they want an efficient revenue-share or load-share model.



Friday, June 17, 2011

Bridgewater Systems to be acquired by Amdocs for $211M



"Becoming part of Amdocs would enable us to accelerate our corporate growth strategy, centered around global expansion, enabling the transformation to next generation converged networks, portfolio and solution innovation, and leveraging our installed base," said Ed Ogonek, President and CEO, Bridgewater.


As mobile traffic continues to increase and video becomes a larger part of it, a tight, intelligent traffic management entity will be necessary. I see the collapse of charging, DPI, PCRF, routing, browsing and optimization into consolidated platforms accelerating. Remember, Amdocs acquired Streamezzo last year to tighten its mobile video story.

In my mind, carriers today tolerate having separate GGSN, PCEF, proxies, browsing gateways, DPI, web optimization and video optimization engines only because the market is very atomized. The skill set is very dispersed, and no vendor today has an intelligent end-to-end solution to manage traffic from the backbone through the core to the RAN.
As discussed previously, the market is not mature enough for the best-of-breed approach; full-spectrum vendors will step up.

As video traffic increases, it will become evident that a daisy chain of proxies is inefficient, costly, hardly scalable and complex to manage. I don't see policy management becoming so ubiquitous and intuitive that rules can be instantiated at one point and flow harmoniously to all elements without impacting the user experience.
The user experience in video is only as good as the lowest-performing element in the delivery chain.

Inevitably, we will see more concentration in that space in the near future.

Thursday, June 16, 2011

Cloudlet, CDN and content acceleration

As indicated in a previous post, there is much to be gained from examining in more detail how mobile networks could benefit from performing sophisticated content manipulation in the cloud, rather than in the core network or on the device.


Yesterday, Citrix Systems and Juniper Networks invested in Cotendo, which has announced a $17M round of financing. AT&T is already a partner, and the company focuses on the enterprise and media segments. Cotendo's network gives it global presence, while its technology focuses on accelerating the mobile web experience.
Additionally, its cloud-based implementation provides an alternative to a massive CDN strategy, which requires adding points of presence as traffic and geographic coverage expand, a model that is scalable but economically difficult.

Most of the company's positioning is about the web in general; video seems to be missing for the moment.
It will be interesting to see how Cotendo evolves into the mobile realm, disrupting carrier and CDN strategies in the future.

Tuesday, June 7, 2011

Google vs. rest of the world: WebM vs H264

After the acquisition of On2 Technologies, as anticipated, Google has open-sourced its WebM video technology (the VP8 video codec and Vorbis audio codec encapsulated in a Matroska-based container) in an attempt to counteract H.264.

The issue:
In the large-scale war between the giants Apple, Adobe, Microsoft and Google, video has become the latest battlefield. As PCs and mobile devices consume more and more video, the four companies battle to capture content owners' and device manufacturers' mind share, in order to ensure dominance over the user experience.
Since video formats and protocols are fairly well standardized, the main area for differentiation remains codecs.
Codecs are left to anyone's implementation choice. The issue can be thorny, though, as most codecs used to encode and decode specific formats require license payments to intellectual property owners.
For instance, the H.264 format is managed by MPEG LA, which has assembled a pool of patents associated with the format from diverse third parties and licenses its usage, collecting and redistributing royalties on behalf of the patent owners. H.264 is designed for transmitting video in variable-bandwidth environments and has been adopted by most handset manufacturers, as well as Microsoft, Apple and Adobe, as the de facto format.

If you are Google, though, the idea of paying licenses to third parties, which are in most cases direct competitors, for something as fundamental as video is a problem.

The move:
As a result, Google has announced that it is converting all of YouTube's most-watched videos to WebM and that the format is becoming the preferred one for all Google properties (YouTube, Chrome...).
The purpose here is for Google to avoid paying royalties to MPEG LA, while controlling the user experience by trying to vertically integrate codec usage across content owners, browsers and device manufacturers.
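For a sense of what such a conversion involves, here is a minimal sketch of an H.264/MP4-to-WebM transcode using the ffmpeg command line (libvpx for VP8, libvorbis for audio). The file names and bitrate are placeholders; this is not how YouTube actually performs its conversions.

```python
# Sketch only: batch transcode from H.264/MP4 to WebM (VP8 + Vorbis)
# by shelling out to the ffmpeg CLI. Paths and bitrates are placeholders.
import subprocess

def to_webm(src: str, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "libvpx", "-b:v", "1M",   # VP8 video at ~1 Mbps
         "-c:a", "libvorbis",              # Vorbis audio
         dst],
        check=True)

to_webm("input_h264.mp4", "output.webm")
```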

This does not mean that Google will stop supporting other formats (Flash, H.264...), but the writing is on the wall, if they can garner enough support.

The result:
It is arguable whether WebM can actually circumvent MPEG LA's H.264 royalty claims. Investigations are already ongoing as to whether VP8 infringes any H.264 intellectual property. Conversely, the U.S. Department of Justice is investigating whether MPEG LA's practices are stifling competition.

In the meantime, content owners, device manufacturers and browser vendors have to contend with one more format and codec, increasing fragmentation in this space and reducing interoperability.