
Wednesday, April 11, 2012

Policy driven optimization

The video optimization market is still young, but with deployments in over 80 mobile networks globally, I am officially transitioning it from the emerging to the growth phase in the technology life cycle matrix.


Mobile World Congress brought much news in that segment, from new entrants to network announcements, technology launches and new partnerships. I think one of the most interesting trends is in policy and charging management for video.


Operators understand that charging models based on pure data consumption are bound to be hard for users to understand and to be either extremely inefficient or expensive. In a world where a new iPad can consume a subscriber's data plan in a matter of hours, while the same subscriber could be watching 4 to 8 times that amount of video on a different device, the one-size-fits-all data plan is a dangerous proposition.


While the tool set to address the issue is essentially in place, with intelligent GGSNs, EPCs, DPIs, PCRFs and video delivery and optimization engines, this collection of devices has mostly been managing its portion of traffic in a very disorganized fashion: access control at the radio and transport layers segregated from protocol and application, accounting separated from authorization and charging...
Policy control is the technology designed to unify them and, since this market's inception, it has been doing a good job of coordinating access control, accounting, charging, rating and permissions management for voice and data.


What about video?
The Diameter Gx interface is extensible, providing the semantics to convey traffic observations and decisions between one or several policy decision points and policy enforcement points. The standard allows for complex, iterative exchanges between end points to ascertain a session's user, their permissions and their balance as they use cellular services.
Video was not a dominant part of the traffic when the policy frameworks were put in place, so it is not surprising that first-generation PCRFs and video optimization deployments were completely independent. Rules had to be provisioned and maintained in separate systems, because the PCRF was not video aware and the video optimization platforms were not policy aware.
This led to many issues, ranging from poor experience (a DPI instructed to throttle traffic below the encoding rate of a video), to bill shock (ill-informed users blowing past their data allowance), to revenue leakage (poorly designed charging models unable to segregate the different types of HTTP traffic).
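To make the coordination gap concrete, here is a minimal sketch in Python, with purely hypothetical function names and values (this is not a standard Gx mechanism), of the kind of cross-check a video-aware policy decision point could perform before pushing an enforcement action, instead of blindly applying a static cap:

```python
from typing import Optional

def video_aware_enforcement(plan_cap_kbps: int, video_encoding_kbps: Optional[int]) -> str:
    """Return the action a coordinated policy function could push to its
    enforcement points (DPI / video optimization engine)."""
    if video_encoding_kbps is None:
        # First-generation behavior: no visibility into the video, so the static
        # plan cap is applied even if it falls below the stream's encoding rate,
        # which is exactly what causes stalls and rebuffering.
        return f"throttle at {plan_cap_kbps} kbps"
    if video_encoding_kbps <= plan_cap_kbps:
        # The video fits under the cap: throttling and playback do not conflict.
        return f"throttle at {plan_cap_kbps} kbps"
    # The video needs more than the cap: ask the optimization engine to transcode
    # down to the cap rather than letting the DPI starve the stream mid-play.
    return f"transcode to {plan_cap_kbps} kbps, do not throttle mid-stream"

print(video_aware_enforcement(600, None))  # the "blind" case described above
print(video_aware_enforcement(600, 750))   # the coordinated, video-aware case
```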


The next-generation networks see a much tighter integration between policy decision and policy enforcement for the delivery of video in mobile networks. Many vendors in both segments collaborate and have moved past pure interoperability testing to deployments in commercial networks. Unfortunately, we have not seen many proof points of these integrations yet. Mostly, this is because it is an emerging area: operators are still trying to find the right recipe for video charging, standards do not offer guidance for specific video-related policies, and vendors have to rely on two-way (proprietary?) implementations.


Lately, we have seen the leaders in policy management and video optimization collaborate much more closely to offer solutions in this space; in some cases as the result of being deployed in the same networks and being "forced" to integrate gracefully, in many cases because the market is entering a new stage of maturation. As you well know, I have been advocating a closer collaboration between DPI, policy management and video optimization for a while (here, here and here for instance). I think these are signs of market maturation that will accelerate concentration in that space. There are more and more rumors of video optimization vendors getting closer to mature policy vendors. It is a logical outcome for operators to get a better-integrated traffic management and charging management ecosystem centered around video going forward. I am looking forward to discussing these topics and more at Policy Control 2012 in Amsterdam, April 24-25.

Monday, February 20, 2012

Mobile video QOE part I: Subjective measurement


As video traffic continues to flood many wireless networks, over 80 mobile network operators have turned towards video optimization as a means to reduce the costs associated with growing their capacity for video traffic.
In many cases, the trials and deployments I have been involved in have shown carriers at a loss when it comes to comparing one vendor or technology against another. Lately, a few specialized vendors have been offering video QoE (Quality of Experience) tools to measure the quality of the video transmitted over wireless networks. In some cases, the video optimization vendors themselves have also started to package some measurement capability with their tools to illustrate the quality of their encoding.
In the next few posts, and in more detail in my report "Video Optimization 2012", I examine the challenges and benefits of measuring video QoE in wireless networks, together with the most popular methods and their limitations.
Video QoE subjective measurement
Video quality is a very subjective matter. There is a whole body of science dedicated to providing an objective measure for a subjective quality. The attempt, here, is to rationalize the differences in quality between two videos via a mathematical measurement. These are called objective measurements and will be addressed in my next posts. Subjective measurement, on the other hand, is a more reliable means to determine a video's quality. It is also the most expensive and the most time-consuming technique if performed properly.
For video optimization, a subjective measurement usually necessitates a focus group that is shown several versions of a video at different quality levels (read: encodings). The individual opinion of each viewer is recorded in a templatized feedback form and averaged. For this method to work, all users need to see the same videos, in the same sequence, under the same conditions. It means that if the videos are to be streamed over a wireless network, it should be in a controlled environment, so that the same level of QoS is served for the same videos. You can then vary the protocol by having users compare the original video with a modified version, both played at the same time, on the same device, for instance.
The averaged opinion, the Mean Opinion Score (MOS), of each video is then used to rank the different versions. In the case of video optimization, we can imagine an original video encoded at 2 Mbps, then four versions provided by each vendor at 1 Mbps, 750 kbps, 500 kbps and 250 kbps. Each subject in the focus group then rates each version from each vendor from 1 to 5, for instance.
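A minimal sketch of the MOS bookkeeping just described, assuming made-up vendor names, viewers and scores: each viewer rates each vendor's version from 1 to 5, and the Mean Opinion Score is simply the average per vendor and per encoding bitrate.

```python
from statistics import mean

# scores[vendor][bitrate_kbps] -> the 1..5 opinions collected from the focus group
# (illustrative values only)
scores = {
    "vendor_a": {1000: [4, 5, 4], 750: [4, 4, 3], 500: [2, 3, 2], 250: [1, 2, 1]},
    "vendor_b": {1000: [4, 4, 4], 750: [4, 4, 4], 500: [3, 4, 3], 250: [3, 3, 3]},
}

# Mean Opinion Score per vendor and per encoding bitrate
mos = {
    vendor: {bitrate: round(mean(opinions), 2) for bitrate, opinions in versions.items()}
    for vendor, versions in scores.items()
}

for vendor, by_bitrate in mos.items():
    print(vendor, by_bitrate)
```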
The environment must be strictly controlled for the results to be meaningful. The variables must be the same for each vendor: e.g. all performing transcoding in real time or all offline, the same network conditions for all playbacks / streams and, of course, the same devices and the same group of users.
You can easily understand that this method can be time consuming and costly, as network equipment and lab time must be reserved, network QoS must be controlled, the focus group must be available for the duration, etc.
In that example, the carrier would have each corresponding version from each vendor compared in parallel for the computation of the MOS. The result could be something like this:
The size of the sample (the number of users in the focus group) and how controlled the environment is can dramatically affect the results, and it is not rare to find aberrational results, as in the example above, where vendor "a" sees its result increase from version 2 to 3.
If correctly executed, this test can track the relative quality of each vendor at different levels of optimization. In this case, you can see that vendor "a" has a high level of perceived quality at medium-high bit rates but performs poorly at lower bit rates. Vendor "b" shows little degradation as the encoding bit rate decreases, while vendors "c" and "d" show near-linear degradation as the encoding bit rate is reduced.
In every case, the test must be performed in a controlled environment to be valid. Results will vary, sometimes greatly, from one vendor to another, and sometimes for the same vendor at different bit rates, so a video expert is necessary to create the testing protocol, evaluate the vendors' setups, analyse the results and interpret the scores. As you can see, this is not an easy task, and rare are the carriers who have successfully performed subjective analysis with meaningful results for vendor evaluation. This is why, by and large, vendors and carriers have started to look at automated tools to evaluate existing video quality in a given network, to compare different vendors and technologies, and to measure ongoing perceived quality degradation due to network congestion or destructive video optimization. This will be the subject of my next posts.

Thursday, September 15, 2011

Openet's Intelligent Video Management Solution

As you well know, I have been advocating closer collaboration between DPI, policy management and video optimization for a while (here and here for instance).


In my mind, most carriers had dealt mostly with transactional data traffic until video came along. There are some fundamental differences between managing transactional and flow-based data traffic. The quality of experience of a video service depends as much on the intrinsic quality of the video as on the way that video is being delivered.


In a mobile network, with a daisy chain of proxies and gateways (GGSN, DPI, browsing gateway, video optimization engine, caching systems...), the user experience of a streamed video is only going to be as good as the lowest common denominator of that delivery chain.




Gary Rieschick, Director – Wireless and Broadband Solutions at Openet spoke with me today about the Intelligent Video Management Solution launched this week.
"Essentially, as operators are investing in video optimization solutions, they have been asking how to manage video delivery across separate enforcement points. Some vendors are supporting Gx, other are supporting proprietary extensions or proprietary protocols. Some of these vendors have created quality of experience metrics as well, that are used locally, for static rule based video optimization."
Openet has been working with two vendors in the video optimization space to try and harmonize video optimization methods with policy management. For instance, depending on the resulting quality of a video after optimization, the PCRF could decide to zero rate that video if the quality was below a certain threshold.
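As an illustration of that last example, here is a minimal sketch in Python, with hypothetical names and an arbitrary threshold (nothing here reflects Openet's actual implementation): the optimization engine reports a quality score for the delivered video, and the policy decision zero-rates the session when the score falls below the floor.

```python
def charging_decision(reported_quality: float, quality_floor: float = 3.0) -> str:
    """reported_quality is assumed to be a 1..5 QoE-style score sent back by the
    video optimization engine; the 3.0 floor is an arbitrary example value."""
    if reported_quality < quality_floor:
        # Quality was degraded too much: do not count the bytes against the quota.
        return "zero-rate"
    # Acceptable quality: standard volume-based charging applies.
    return "rate-normally"

print(charging_decision(2.4))  # zero-rate
print(charging_decision(4.1))  # rate-normally
```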


The main solution features highlighted by Gary are below:
  • Detection of premium content: The PCRF can be aware of agreements between the content provider and operator and provisioned with rules to prioritize or provide better quality to certain content properties.
  • Content prioritization: based on time of day, congestion detection
  • Synchronization of rules across policy enforcement points, to ensure for instance that the throttling engines at the DPI level and at the video optimization engine level do not clash.
  • Next-hop routing, where the PCRF can instruct the DPI to route the traffic within the operator network based on what the traffic is (video, mail, P2P...)
  • Dynamic policies to supplement and replace static rules provisioned in the video optimization engine, in order to react to network congestion indications, subscriber profile, etc. (see the sketch after this list).
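As a rough sketch of that last bullet (hypothetical tiers and values, not Openet's actual rules), a dynamic policy could derive a single optimization target from live congestion indications and the subscriber profile, and hand the same target to every enforcement point so the DPI and the video optimization engine cannot clash:

```python
def optimization_target_kbps(cell_congested: bool, subscriber_tier: str) -> int:
    """Illustrative dynamic rule: a per-tier baseline, scaled down under congestion."""
    base = {"premium": 1500, "standard": 1000, "economy": 600}[subscriber_tier]
    # Under congestion everyone is scaled down, but tiers keep their relative order.
    return base // 2 if cell_congested else base

# The same value would be pushed to the DPI and to the video optimization engine,
# replacing the static rules each of them used to hold independently.
print(optimization_target_kbps(cell_congested=True, subscriber_tier="standard"))   # 500
print(optimization_target_kbps(cell_congested=False, subscriber_tier="economy"))   # 600
```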


I think it is a good step by Openet to take some thought leadership in this space. Operators need help to create a carefully orchestrated delivery chain for video.
While Openet's solution might work well with a few vendors, I think, though, that a real industry standardization effort is necessary to provide video-specific extensions to the Gx policy interface.
Delivering and optimizing video in a wireless network results in a destructive user experience whenever the control plane enabling feedback on congestion, original video quality, resulting video quality, and device and network capabilities is not shared across all policy enforcement and policy decision points.

Wednesday, June 29, 2011

BBTM Part 3: Openet & Cricket

Openet
 
Michael Manzo's presentation focused on the top techniques and trends to watch for in mobile broadband.
#1: Smarter data service tiers
Add a QoS payment scheme on top of speed and volume: allow users to pay for better access and quality or, conversely, zero-rate traffic at certain times of day when the network is less congested. It is a good idea, but the practical implementation seems complicated: a self-care interface for data usage changes, which triggers the PCRF and charging gateway, which in turn trigger the DPI and PCEF...


#2 Service passes
Prepaid vouchers for data usage, e.g. $15 for 250 MB...
This is in effect at many operators and has proven effective in terms of data ARPU growth.


#3 OTT Content partnership
This is the one, I think, that has the most mileage. As posted previously, in my opinion it is inevitable that carriers will have to partner with Hulu, YouTube, iCloud, Netflix, BitTorrent... I cannot imagine these OTT vendors sitting idle while carriers try to monetize this traffic on their networks and modify the intended user experience in the process. Again, before revenue share can happen, QoE sharing is necessary.
Carriers and OTT properties need to compromise on what content and services should get what type of QoS in which circumstances to apportion a fair revenue share.


#4 TV everywhere
Premium content monetization for tablets. VOD everywhere sounds like a sexy business model, when mobile broadband allows content to be shared across TVs, set-top boxes, Blu-ray players, video game consoles, tablets, laptops and smartphones. As previously discussed, it is a business model that has some way to go before being palatable for the mass market.

#5 Fixed Mobile Convergence Parental control

Voice, text and browsing parental control. While I understand the value proposition and, in some cases, the regulatory constraints, this is the trend I am the most skeptical about. In my experience, trying to enforce parental control on content usage is a bit like content-based charging: an interesting concept that is too complex to implement with today's technology. I don't think it is technically or logistically viable to maintain whitelists / blacklists of URLs or domains to regulate your child's access to the internet. The web properties change too fast, and your average teenager knows enough about anonymizing and redirecting browsers and apps to circumvent any network-based attempt to regulate that usage.



#6 Smarter cloud services
Expand policy, charging, optimization to the cloud. This will be the subject of a future post.



Cricket Leap
Leap is an interesting carrier, with a very innovative, disruptive positioning that has allowed it to garner a very different customer base from the rest of the US carriers:
Leap's customer base is mostly young (55% under 35 years old), cost conscious (median yearly income under $50k), largely from ethnic minorities (60%), and uses its Cricket phone as its primary phone (95%!).

The key to attracting this demographic has historically been to offer prepaid contracts with unlimited usage. The result is quite interesting, with 5.8m subscribers generating $2.8B in revenue.

Their tiered offering, using speeds and throttling for mobile broadband, has been a staggering success. Today, mobile broadband revenue covers the entire CAPEX and OPEX of the network, which means voice and text revenues are pure margin...

Going forward, Leap will differentiate further, using QoS and QoE levers to create an even more segmented pricing strategy for mobile broadband users.


On the question of traffic optimization, it is interesting to note that Leap commented "Anytime we implemented optimization techniques in our network, we did not see any negative impact on customer traffic".


Friday, June 24, 2011

BBTM: The value circle and content based charging

Broadband Traffic Management North America took place in Boston on June 21-22.
Here are my notes from the event and highlights from the conference.




This is the first year that Informa has held this event in North America. After the great success of the UK edition last November, they decided to create regional offshoots of the show in the Middle East, North America and Asia, with the global edition still planned in the UK this November.


The show, like in the UK, featured most of the subjects that are relevant in Mobile Broadband:


  • Data offload,
  • Policy management and charging,
  • Video optimization,
  • Femtocells,
  • Traffic optimization...


The attendance and presentations were mostly vendors, with only a few carriers represented (AT&T, Verizon, Cricket, Telecom Italia). No doubt the ratio will change in the future as the show takes on a more visible role in North America.


Here are a few of the highlights from my perspective:


Jeff Eisenach - Navigant Economics
Jeff is a veteran of the US regulatory forums. He gave an interesting presentation about the change from value chain to value circle in wireless, forcing carriers to reconsider old notions such as "owning the customer".


I quite like the concept. Customers now actually have relationships with content owners, aggregators, phone manufacturers and carriers... No one owns the customer; they are shared, and in my mind this will force more concessions from carriers in the future, beyond the revenue shares we have seen between the likes of AT&T and Apple for the iPhone.


My panel:
How Can Carriers Move Away From Unlimited Data Plans While Keeping Customers Happy?
with:

  • Mike Coward, Co-Founder & CTO, Continuous Computing
  • Fred Kemmerer, CTO, Genband
  • Chris Hoover, VP Product Marketing and Management, Openet
There is still a lot of talk about content-based charging from PCRF vendors. It seems to be the projected cure for all that ails mobile networks. Too much OTT in your network? Content-based charging is the solution. Too much P2P traffic? Content-based charging is the solution...


When I asked how you reconcile content, transport and application, offering the following example, it was clear that there is still a lot of work to do by charging and policy vendors to enable true content-based charging.


On my iPhone, I can watch video from the browser or from an app. Some apps, before serving you content, ask your permission to push notifications or to use your location. If an app invokes content and also invokes GPS or another network-based service, how is the network operator to understand that this video is YouTube, served from the app rather than the browser, invoking GPS for location targeting, and to charge for the overall service?
Today, inevitably, the user gets charged for the data transport of the video and for the GPS call. Even if an operator offers all-you-can-eat YouTube, you still get charged for the GPS call, right? Because the network is not intelligent enough to make the contextual difference between me using my map app and invoking GPS, and a third-party app invoking GPS.
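To make the gap concrete, here is a minimal sketch in Python (all names, destinations and byte counts are hypothetical) of why a charging rule keyed only on what the network sees per flow cannot tie the GPS lookup back to the zero-rated YouTube session:

```python
# Two flows that, from the user's perspective, belong to the same app session.
FLOWS = [
    {"traffic_type": "video", "destination": "youtube.com", "bytes": 50_000_000},
    {"traffic_type": "location_api", "destination": "maps.example.com", "bytes": 2_000},
]

def billable_bytes(flow: dict) -> int:
    # All-you-can-eat YouTube: video flows to youtube.com are zero-rated...
    if flow["traffic_type"] == "video" and flow["destination"].endswith("youtube.com"):
        return 0
    # ...but the GPS/location call made on behalf of that same video session is
    # indistinguishable from any other data flow, so it gets charged anyway.
    return flow["bytes"]

print(sum(billable_bytes(f) for f in FLOWS))  # 2000 bytes billed despite the "free" video
```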

As more and more discussions about content-based charging ensued, I offered that an operator attempting to derive revenue from OTT services will inevitably need to get closer to content and app providers.

Content providers will demand a revenue share on the revenues generated from their services. Otherwise, you will see content providers encrypting their traffic in an attempt to keep control of the user experience and to deny the operator the capability to throttle or mangle the content delivery.


Next-generation DPI, traffic management, PCRF and proxies are necessary.
I believe that operators will have to extend quality of service (QoS) and quality of experience (QoE) control to the content provider, beyond the operators' walled garden, if they want an efficient revenue share or load share model.