I often get asked at events such as Broadband Traffic Management 2012, where I am chairing the mobile video stream this afternoon, "How does video traffic evolve in an LTE network? Won't LTE negate the need for traffic management and video optimization?"
Jens Schulte-Bockum, CEO of Vodafone Germany, shocked the industry last week by indicating that 85% of Vodafone Germany's LTE traffic is mobile video.
I think what most people fail to understand is that video, unlike voice or generic data, is elastic. Technologies such as adaptive streaming and source-based encoding mean that devices and content providers, given bandwidth, will utilize all that is available.
Device manufacturers implement increasingly aggressive versions of video streaming, grabbing as much bandwidth as is available, independently of video encoding, while content providers tend to offer increasing video quality, moving from 480p to 720p and 1080p, and soon 4K.
This was corroborated this morning by Eric Klinker, president and CEO of BitTorrent.
Operators need to understand that video must be managed as its own service, independently from data and voice, as it behaves differently and will "eat up" resources as they are made available.
So the short answer is no, LTE will not solve the issue but rather become a new variable in the equation.
Tuesday, November 6, 2012
Wednesday, October 31, 2012
How to monetize mobile video part II
These posts are excerpts from my article in Mobile Europe from October 2012.
Detect
Measure Experience, not Speed or Size
Analyze, Segment Consumers and Traffic
Control, Manage
Monetize
The Age Of Video: How Mobile Networks Must Evolve
In 3G, mobile network operators find themselves in a situation where their core network is composed of many complex elements (GGSN, EPC, browsing gateways, proxies, DPI, PCRF…) that are extremely specialized but were designed with transactional data in mind. Radio access is a scarce resource, with many operators battling their regulators to obtain more spectrum. The current model for adding capacity, based on purchasing more base stations and densifying the network, is finding its limits. Costs for network build-out are even expected to exceed data revenues in the coming years.
On the technical front, some operators are approaching Shannon's limit, the theoretical ceiling for spectrum efficiency. Diminishing returns are the rule rather than the exception as the RAN (Radio Access Network) becomes denser for the same available spectrum, and noise and interference increase.
On the financial front, should an operator follow the demand, it would have to double its mobile data capacity on a yearly basis. The projected revenue increase for data services shows only a CAGR of 20% through 2015. How can operators keep running their business profitably?
Operationally, doubling capacity every year seems impossible for most operators, who work with 3-to-5-year roll-out plans. A change of paradigm is necessary.
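A quick back-of-envelope sketch, using only the figures above (traffic demand doubling yearly, data revenue growing at a 20% CAGR), shows how quickly the gap widens:

```python
# Illustrative compounding only, indexed to 100 in year 0:
# traffic demand doubles yearly, revenue grows at the 20% CAGR cited above.
traffic, revenue = 100.0, 100.0
for year in range(1, 6):
    traffic *= 2.0   # demand doubles every year
    revenue *= 1.2   # data revenue CAGR of 20%
    print(f"year {year}: traffic {traffic:>6.0f}, revenue {revenue:>5.0f}")
# After 5 years traffic is up 32x while revenue is up ~2.5x.
```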
Solutions exist and are starting to emerge: upgrading to HSPA+ or LTE, using smaller cells, drastically changing the pricing structure of video and social services, network and video optimization, offloading part of the traffic to wifi, implementing adaptive bit rate, optimizing the radio link, caching, using CDNs, imagining new business models with content providers, device manufacturers and operators…
Detect
The main issue is one of network intelligence. Mobile network operators want their network utilization optimized, not minimized. Traffic patterns need to be collected, analyzed and represented so that data, and particularly video, can be projected, not at the country-wide, multi-year level as is done today. It is necessary to build granular network planning capacity per sector and cell, at RAN, core and backhaul levels, with tools that are video aware. Current DPI and RAN monitoring tools cannot detect video efficiently or analyze it deeply enough to allow for pattern recognition. Additionally, it is necessary to be able to isolate, follow and act on individual video streams at a per-subscriber, per-service, per-property, per-CDN level, not simply at the protocol level.
Current mobile network analytics capabilities are mostly inherited from 3G. DPI and traffic management engines rely mostly on protocol analysis and packet categorization to perform their classification and reporting. Unfortunately, in the case of video, this is insufficient. Video takes many forms in mobile networks and is delivered over many protocols (RTSP, RTMP, HTTP, MPEG2-TS…). Recognizing these protocols is not enough to perform the necessary next steps. Increasingly, video traffic is delivered over HTTP progressive download, and most current analytics tools cannot recognize video as a traffic type today; they rely on URL recognition rather than traffic analysis. This leads to issues: how do you differentiate a user browsing between YouTube pages from one watching a video? How do you discriminate videos embedded in pages? How do you recognize YouTube videos embedded in Facebook? How do you know whether a video is an advertisement or main programming? How do you know whether a video should be delivered in HD or at a lower resolution?
To categorize and manage video accurately, it is necessary to recognize, at a minimum, the video protocol, container, codec, encoding rate, resolution, duration and origin, in order to perform pattern recognition.
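As a purely illustrative sketch of what payload-level detection means, here is how a probe might recognize common video containers from the first bytes of an HTTP response body, independently of the URL; the signatures are standard, the function itself is a hypothetical helper:

```python
def sniff_container(payload: bytes) -> str:
    """Classify a media payload by container magic bytes, not by URL."""
    if len(payload) >= 12 and payload[4:8] == b"ftyp":
        return "mp4"        # ISO base media file format (MP4/3GP)
    if payload[:3] == b"FLV":
        return "flv"        # Flash video
    if payload[:4] == b"\x1a\x45\xdf\xa3":
        return "webm/mkv"   # EBML header
    if len(payload) >= 189 and payload[0] == 0x47 and payload[188] == 0x47:
        return "mpeg2-ts"   # 188-byte packets with 0x47 sync bytes
    return "unknown"
```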
Measure Experience, not Speed or Size
The next necessary step after identifying and indexing the video traffic is the capacity to grade it from a quality standpoint. As video quality becomes synonymous with network quality in viewers' minds, mobile network operators must be able to measure and control video quality. Current capabilities in this space are focused on measuring network speed and content size and inferring user satisfaction. This is inadequate.
Any hope of monetizing mobile video for mobile network operators beyond byte accounting relies on being able to reliably grade video content in terms of quality. This quality measurement is the cornerstone of providing subscribers with the assurance that the content they view conforms to the level of quality they are entitled to. It is also necessary for network operators to establish baselines with content providers and aggregators, who view content quality as one of the main elements of pricing.
A uniform Quality of Experience (QoE) measurement standard is necessary for the industry to progress. Today, there is no valid QoE metric for mobile networks, leaving mobile operators to rely on sparse proprietary tools, often derived from or created for broadcast and professional video, and wholly inadequate for mobile networks. Mobile network operators must be able to measure QoE per video, subscriber, session, sector, cell, origin and CDN if they want to create intelligent charging models.
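To make the idea concrete, here is a purely hypothetical toy score, not an industry standard (as noted, none exists), combining startup delay, rebuffering and delivered resolution; all weights are arbitrary assumptions:

```python
def qoe_score(startup_s: float, stall_s: float, play_s: float,
              delivered_height: int, native_height: int) -> float:
    """Toy 0-100 QoE score; weights are illustrative assumptions."""
    stall_ratio = stall_s / max(play_s, 1e-9)
    resolution_ratio = min(delivered_height / native_height, 1.0)
    score = 100.0
    score -= 10.0 * min(startup_s, 5.0)       # slow startup: up to -50
    score -= 100.0 * min(stall_ratio, 0.3)    # rebuffering: up to -30
    score -= 20.0 * (1.0 - resolution_ratio)  # downscaling: up to -20
    return max(score, 0.0)
```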
Analyze, Segment Consumers and Traffic
Mobile network operators have been efficiently segmenting their customer base, building packages, bundles and price plans adapted to their targets. In the era of video, this is not enough.
Once traffic is identified, indexed and recognized, it is important to segment the population and usage. Is video traffic mostly from premium content providers and aggregators, or from free user-generated sites? Are videos watched mostly long form or short form? Are they watched on tablets or smartphones? Are they viral, watched many times, or does consumption follow the long tail? All these data points and many others are necessary to understand the nature of subscribers' consumption and will dictate the solutions that are most appropriate. This is a crucial step in being able to control the video traffic.
Control, Manage
Once video traffic is correctly identified and indexed, it becomes possible to manage it. This is a controversial topic, as net neutrality as a concept is far from settled, at least in the mobile world. My view is that in a model where scarcity (spectrum, bandwidth) and costs are borne by one player (operators) while revenue and demand are borne by others (content providers and subscribers), net neutrality is impractical and anti-competitive. Unlike fixed networks, where quasi-unlimited capacity and low entry costs allow easy introduction of content and services, mobile networks' cost structures and business models are managed systems where demand outgrows capacity, which negates equal access to resources. For instance, no one is talking about net neutrality in the context of television. I believe operators will be able to discriminate traffic and offer models based on subscriber and traffic differentiation; many already can. It is simply a recognition that today, with the current setup, traffic gets degraded naturally as demand grows, and DPI and traffic management engines already provide means to shape and direct traffic in everyone's best interest. No one could conceive of networks where P2P file-sharing traffic goes unchecked and monopolizes the network capacity.
Additionally, all videos are not created equal. There are different definitions, sizes, encoding rates. There are different qualities. Some are produced professionally, with big budgets, some are user generated. Some are live, some are file based. Some are downloaded, some are streamed. Some are premium, some are sponsored, some are freemium, some are free… Videos in their diversity bear the key to monetization.
The diversity of videos and their modes of consumption (some viewers want HD content at the highest quality and will prefer download over streaming; others prefer a video that runs uninterrupted, with a short load time, even at a lesser quality…) is the key to monetization.
Monetize
Mobile network operators must be able to act based on video and subscriber attributes and influence the user's experience. Being able to divert traffic to other bearers (LTE, wifi…) and to adjust a video's quality on the fly are important steps towards creating classes of service, not only amongst subscribers but also between content providers.
It is important as well to enable subscribers to select specific quality levels on the fly and to develop the charging tools to provide instant QoE upgrades.
With the capacity to detect, measure, analyze, segment, control and manage, operators can then monetize video. The steps highlighted here provide means for operators to create sophisticated charging models, whereby subscribers, content providers and aggregators are now included in a virtuous value circle.
Operators should explore creating different quality thresholds for the video content that transits through their network. This becomes a means to charge subscribers and/or content providers for premium guaranteed quality.
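A sketch of what such thresholds might look like as a class-of-service table; tier names and caps are assumptions for illustration, not any operator's actual policy:

```python
# Hypothetical quality tiers a subscriber or content provider could pay for.
QUALITY_TIERS = {
    "basic":   {"max_height": 360,  "max_kbps": 600},
    "plus":    {"max_height": 720,  "max_kbps": 2500},
    "premium": {"max_height": 1080, "max_kbps": 6000},
}

def target_quality(tier: str, native_height: int, native_kbps: int) -> dict:
    """Cap a video's delivered quality at the subscriber's entitled tier."""
    cap = QUALITY_TIERS.get(tier, QUALITY_TIERS["basic"])
    return {"height": min(native_height, cap["max_height"]),
            "kbps": min(native_kbps, cap["max_kbps"])}
```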
Monday, October 29, 2012
How to monetize mobile video part I
These posts are excerpts from my article in Mobile Europe from October 2012.
Video is a global phenomenon in mobile networks. In less than three years, it has exploded from a marginal use case to over 50% of mobile traffic in 2012.
Mobile networks, until 3G, were designed and deployed predominantly for transactional data. Messaging, email and browsing are fairly low-impact and lightweight in terms of payload, and only necessitate speeds compatible with UMTS. Video brings a new element to the equation. Users rarely complain if their text or email arrives late; in fact, they rarely notice. Video provides immediate feedback. Consumers demand quality and increasingly equate the network's quality with the video quality.
With the wide implementation of HSPA(+) and the first LTE deployments, together with the availability of attractive new smartphones, tablets and ultrabooks, it has become clear that today's networks and price structures are ill-prepared to meet these new challenges.
From value chain to value circles: the operators' broken business model
One of the main reasons why the current models are inadequate to monetize video is the unresolved changes in the value chain. Handset and device vendors have gained much power in the balance lately, and many consumers choose a device or a brand first, before a network operator. In many cases, subscribers will churn from their current operator if they cannot get access to the latest device. Additionally, device vendors, with the advent of app stores, have become content aggregators and content providers, replacing the operators' traditional value-added services.
In parallel, the suppliers of content and services are boldly pushing their consumer relationships to bypass traditional delivery media. These Over-The-Top (OTT) players extract more value from consumers than the access and network providers. This trend is accelerating and threatens the very fabric of the business model for delivering mobile services.
Mobile video is already being monetized by premium content vendors and aggregators, through subscription, bundling and advertisement. Mobile network operators find themselves excluded overnight from these new value circles while being forced to support the burden of the investment. In many cases, this situation is a self-inflicted wound.
Operators competed fiercely to acquire more subscribers while markets were growing. As mature markets approached saturation, price differentiation became a strong driver to capture and retain subscribers. As 3G was being rolled out in the mid 2000s, the mobile markets were not yet saturated and mobile network operators' business models still revolved around customer acquisition. A favourite tool was the introduction of all-you-can-eat unlimited data plans to accelerate customer acquisition and capture subscribers through long-term contracts. As a result, customer penetration grew, accelerating with the introduction of smartphones and tablets by 2007. By 2009, traffic started to grow exponentially.
Data traffic was growing faster than expected: AT&T data traffic grew 80x between 2007 and 2010 and is projected to grow another 10x between 2010 and 2015. Korea Telecom traffic grew 2x in 2010; Softbank (Japan) traffic doubled in 2011; Orange France traffic doubled in 2010 and doubled again in 2011. In 2012, mature operators are trying to acquire smartphone users, as it is widely believed that their ARPU (Average Revenue Per User) is much higher (nearly twice) than that of traditional feature phone subscribers.
The cost to acquire these subscribers is significant, as many operators end up subsidizing the devices and having to significantly increase their network capacity.
At the same time, it appeared that consumer data consumption was changing: the "bandwidth hogs", the top 1% who used to consume 30 to 40% of the traffic, were now consuming about 20%. They were not consuming less; the average user was consuming a lot more, and everyone was becoming a voracious data user.
The price plans devised to make sure the network is fully utilized are backfiring, and many operators are now discontinuing all-you-can-eat data plans and subsidizing the adoption of limited, capped, metered models.
While 4G is seen as a means to increase capacity, it is also a way for many operators to introduce new charging models and depart from bundled, unlimited data plans. It is also a chance to redraw the mobile network to accommodate what is increasingly becoming a video delivery network rather than a voice or data network.
Friday, September 28, 2012
How to weather signalling storms
I was struck a few months back by an anecdote from Telecom Italia about a signalling storm in their network that brought unanticipated outages. After investigation, the operator found that the Android launch of Angry Birds differed from the iOS version in one major way: it was a free app monetized through advertising. Ads were being requested and served between each level (or retry).
If you are like me, you can easily go through 10 or more levels (mmmh... retries) in a minute. Each one of these created a request to the ad server, which generated queries to the subscriber database, location and charging engines over diameter, resulting in +351% diameter traffic.
The traffic generated by one chatty app brought the network to its knees within days of its launch.
As video traffic congestion becomes more prevalent and operators start to measure subscriber satisfaction in that area, we have seen several solutions emerge (video optimization, RAN optimization, policy management, HSPA+ and LTE upgrades, new pricing models...).
Signalling congestion, by contrast, remains an emerging issue. I sat down yesterday with Tekelec's Director of Strategic Marketing, Joanne Steinberg, to discuss the topic and what operators should do about it.
Tekelec recently (September 2012) released its LTE Diameter Signalling Index. The report projects that diameter traffic will increase at a +252% CAGR through 2016, from 800k to 46 million messages per second globally. This is due to a radical change in application behavior, as well as the new pricing and business models put in place by operators. Policy management, QoS management, metered charging, two-sided business models and M2M traffic are some of the culprits highlighted in the report.
Diameter is a protocol that was originally invented to replace RADIUS for the main purposes of Authentication, Authorization and Accounting (AAA). Real-time charging and the evolution to IN drove its implementation. The protocol was created to be lighter than RADIUS while remaining extensible, with a variety of proprietary fields that can be added for specific uses. Its extensibility was the main criterion for its adoption as the protocol of choice for policy and charging functions.
A victim of its own success, the protocol is now used in LTE for a variety of tasks, ranging from querying subscriber databases (HSS) and user balances to carrying transactional charging and policy traffic.
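For illustration, here is roughly what that extensibility looks like on the wire: a minimal sketch of encoding a single diameter AVP following the RFC 3588 layout (the helper is mine, not from any product). Adding a new use case is just a new code/vendor-ID pair, with no protocol change:

```python
import struct

def encode_avp(code: int, data: bytes, vendor_id: int = 0,
               mandatory: bool = True) -> bytes:
    """Encode one diameter AVP: 4-byte code, 1-byte flags, 3-byte length,
    optional vendor ID, data padded to a 32-bit boundary (RFC 3588)."""
    flags = 0x40 if mandatory else 0x00   # M-bit
    if vendor_id:
        flags |= 0x80                     # V-bit: vendor-specific AVP
        length = 12 + len(data)           # header incl. vendor ID
        header = (struct.pack("!IB", code, flags)
                  + length.to_bytes(3, "big") + struct.pack("!I", vendor_id))
    else:
        length = 8 + len(data)
        header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    padding = b"\x00" * ((4 - len(data) % 4) % 4)
    return header + data + padding
```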
Tekelec's signaling solutions, together with its policy product line (inherited from the Camiant acquisition), provide a variety of solutions to handle the increasing load of diameter signaling traffic; the company proposes its Diameter Signaling Router as a means to "manage, throttle, load balance and route diameter traffic".
In my opinion, data browsing is less predictable than voice or messaging traffic when it comes to signalling. While in the past a message at the establishment of the session, one at the end and optionally a few interim updates were sufficient, today's sophisticated business models and price plans require a lot of signalling traffic. Additionally, diameter is starting to be used outside of the core packet network, towards the RAN (for RAN optimization) and towards the internet (for OTT two-sided business models). OTT content and app providers do not understand the functioning of mobile networks, and we cannot expect device and app signalling traffic to self-regulate. While some 3GPP effort is being expended to evaluate new architectures and rules such as fast dormancy, the problem is likely to grow faster than the standards' capacity to contain it. I believe diameter management and planning are necessary for network operators who are departing from all-you-can-eat data plans towards policy-driven traffic and charging models.
Monday, September 10, 2012
IBC roundup: product launches and announcements
As IBC 2012 is about to finish, here is a select list of announcements from vendors in the video encoding space, for those who have not been able to attend or follow all the news.
As you can see, all the main players used the show as a launch platform, releasing new products, solutions and enhancements. The trend this year was about making multiscreen an economic reality (with lots of features around cost savings, manageability, scalability...), the new HEVC codec and 4K TV, as well as subjects I have recently brought forward, such as edge-based packaging and advertising, making interesting inroads.
ATEME
ATEME launches new contribution encoder at IBC
Cisco
Cisco's ‘Videoscape Distribution Suite' Revolutionizes Video Content Delivery to Multiple Screens
Concurrent
Concurrent Showcases Multiscreen Media Intelligence platform at IBC 2012
Elemental Technologies
Elemental Demonstrates Next-Generation Media Processing at IBC Based on Amazon Web Services
Envivio
Envivio Introduces New On-Demand Transcoder That Significantly Enhances Efficiency and Video Quality
Envivio Enables TV Anytime on Any Screen with Enhancements to Halo Network Media Processor
Harmonic
Harmonic and Nagra Team to Power the World’s First Commercial MPEG-DASH OTT Multiscreen Service
RGBNetworks
RGB Networks Offers Complete Solution for Delivery of On-Demand TV Everywhere Services
RGB Networks Expands Multiscreen Delivery and Monetization Solution
SeaWell Networks
SeaWell Networks Announces First MPEG DASH-based Live and On Demand Video Ad-Insertion Solution at IBC 2012
Telestream
Tuesday, July 31, 2012
Allot continues its spending spree
Oversi Networks is a provider of transparent caching solutions for OTT and P2P traffic. Specifically, Oversi has been developing a purpose-built video cache, one of the first of its kind.
Many vendors in the space have caches built on open-source, general-purpose web caches, originally designed to manage offline video optimization scenarios (for those not able to transcode mp4 or flv/f4v containers in real time). As the long tail of video content unfolds, social media and virality create snowballing effects on some video content, and a generic web cache shows its limitations when it comes to efficiently caching video.
The benefits of a hierarchical, video-specific cache then become clear. Since video nowadays comes in many formats and containers, across many protocols, and since content providers repost the same video with different attributes, titles, URLs, durations, etc., it is quite inefficient to cache video based on metadata recognition alone. Some level of media inspection is necessary to ascertain what the video is and whether it really corresponds to the metadata.
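A minimal sketch of that principle, assuming a fingerprint over the first 64 KB of the media payload (the window size and names are my assumptions): keying the cache on the media bytes rather than the URL lets reposted copies of the same video resolve to the same entry:

```python
import hashlib

def media_cache_key(first_chunk: bytes) -> str:
    """Fingerprint the media bytes themselves, ignoring URL and title."""
    return hashlib.sha256(first_chunk[:65536]).hexdigest()

cache: dict[str, bytes] = {}

def cached_fetch(url: str, first_chunk: bytes, fetch_from_origin) -> bytes:
    key = media_cache_key(first_chunk)        # same bytes -> same key, any URL
    if key not in cache:
        cache[key] = fetch_from_origin(url)   # miss: pull from origin once
    return cache[key]
```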
All in all, another smart acquisition by Allot. On paper, it certainly strengthens the company's position, with technologies compatible and complementary with its legacy portfolio and the recent Ortiva acquisition. It will be interesting to see how Allot's product portfolio evolves over time and how the different product lines start to synergize.
Monday, July 9, 2012
Edge based optimization part II: Edge packaging
As mentioned in my previous post, as video traffic increases across fixed and mobile networks, innovative companies are trying to find ways to reduce the costs and inefficiencies of transporting large amounts of data across geographies.
One of these new techniques is called edge based packaging and relies on adaptive bit rate streaming. It is particularly well adapted for delivery of live and VOD content (not as much for user-generated content).
As we have seen in the past, ABR has many pros and cons, which make the technology useful in certain conditions. For fixed-line content delivery, ABR is useful to account for network variations and provides an optimal video viewing experience. One of the drawbacks is the cost of operating ABR, when a video source must be encoded into three formats (Flash, Apple and Microsoft) and many target bit rates to accommodate network conditions.
Edge-based packaging allows a server situated in a CDN's PoP, at the edge cache, to perform manifest manipulation and bit rate encoding directly at the edge. The server accepts one file/stream as input and can generate a manifest, rewrap, transmux and protect before delivery. This method can generate great savings on several dimensions.
- Backhaul. The amount of payload necessary to transport video is drastically reduced, as only the highest quality stream / file travels between core and edge and the creation of the multiple formats and bit rates is performed at the PoP.
- Storage. Only 1 version of each file / stream needs to be stored centrally. New versions are generated on the fly, per device type when accessed at the edge.
- CPU. Encoding is now distributed and on-demand, reducing the need for large server farms to encode predictively many versions and formats.
Additionally, this method makes it possible to monetize the video stream:
- Advertising insertion. Ad insertion can occur at the edge, on a per stream / subscriber / regional basis.
- Policy enforcement. The edge server can enforce and decide QoE/QoS class of services per subscriber group or per type of content / channel.
Edge-based packaging provides all the benefits of broadcast with the flexibility of unicast. It transforms a broadcast experience into an individualized, customized, targeted unicast experience. It is the perfect tool to optimize, control and monetize OTT traffic in fixed-line networks.
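To make the manifest manipulation step concrete, here is a minimal sketch of an edge server advertising several renditions of a single high-quality source as an HLS master playlist, with each rendition produced only when a client actually requests it; the renditions, bitrates and URI scheme are illustrative assumptions:

```python
# (width, height, bandwidth in bits/s) -- illustrative rendition ladder.
RENDITIONS = [(1920, 1080, 6_000_000), (1280, 720, 3_000_000),
              (854, 480, 1_500_000), (640, 360, 800_000)]

def master_playlist(asset_id: str) -> str:
    """Generate an HLS master playlist pointing at on-demand renditions."""
    lines = ["#EXTM3U"]
    for width, height, bandwidth in RENDITIONS:
        lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},"
                     f"RESOLUTION={width}x{height}")
        lines.append(f"/{asset_id}/{height}p/index.m3u8")
    return "\n".join(lines)

print(master_playlist("live-channel-1"))
```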
Friday, July 6, 2012
Edge based optimization part I: RAN traffic optimization
As video traffic grows in mobile and fixed networks alike, innovative companies are looking at optimizing traffic closer to the user. These companies perform loss-less and lossy optimization at the edge of the network, be it directly in the CDN's PoP or at the RNC in mobile radio networks. Today we will look at cellular RAN-based optimization; a following post will look at edge optimization in fixed networks.
As I have indicated in previous posts (here), I believe implementing lossy video optimization in the core network or the backhaul is very inefficient without a good grasp of what is happening on the user's device, or at least in the radio network. Core-network-based mobile video optimization vendors infer the state of network congestion by reading and extrapolating the state of the TCP connection. Looking at parameters such as round-trip time, packet loss ratio, TCP window, etc., they deduce whether the state of the connection improves or worsens and increase or decrease the rate of optimization. This technique, called dynamic bit rate adaptation, is one of the most advanced among the vendors out there. Others will read the state of the connection at its establishment and set the encoding rate based on that parameter.
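A hedged sketch of what such an adaptation loop might look like; the thresholds and step sizes are my assumptions, not any vendor's implementation:

```python
def adapt_bitrate(current_kbps: int, rtt_ms: float, loss_ratio: float,
                  min_kbps: int = 200, max_kbps: int = 4000) -> int:
    """Nudge the transcoder's target bitrate from TCP-level signals only."""
    if loss_ratio > 0.02 or rtt_ms > 300:     # connection looks congested
        return max(int(current_kbps * 0.8), min_kbps)
    if loss_ratio < 0.005 and rtt_ms < 100:   # connection looks healthy
        return min(int(current_kbps * 1.1), max_kbps)
    return current_kbps                       # otherwise hold steady
```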
The problem with these techniques is that they deal with the symptoms of congestion and not the causes. This leads vendors to increase or reduce the encoded bit rate of the video without understanding what the user is actually experiencing in the field. As you well know, there is a range of issues affecting the state of a TCP connection, from the device's CPU, its antenna reception and the RAN sector's signalling occupancy to whether the user is moving, that are not actually related to payload congestion. Core vendors have no way to diagnose these situations and therefore treat any degradation of signal as payload congestion, in some cases creating race conditions and snowball effects where the optimization engine actually contributes to the degradation of the user experience rather than improving it.
RAN-based optimization vendors are deployed in the RAN, at the RNC or even the base station level, and perform a real-time analysis of the traffic. Looking at both payload and signalling per sector, cell, aggregation site and RNC, they offer a great understanding of what the user is experiencing in real time and whether a degradation in a TCP connection is the result of payload congestion, signalling issues or cell handover, for instance. This precious data is then analysed, presented and made available for corrective action. Some vendors provide the congestion indications via a diameter integration, with the information travelling from the RAN to the core to allow resolution and optimization by the PCRF and the video optimization engine. Some vendors even provide loss-less and lossy techniques at the RAN level to complement the core capabilities, ranging from payload and DNS deep caching to TCP tuning, pacing and content shaping.
This is, in my mind, a great improvement to mobile networks, breaking the barrier between RAN and core and performing holistic optimization along the delivery chain, where it matters most, with the right information to understand the network's condition.
The next step is enabling the device itself to report to the network its reading of the network condition, together with the device state and the video experience, providing a feedback loop to the network. The vendors that solve the equation device state + RAN condition + policy management + video optimization = better user experience will strike gold and enable operators to truly monetize and improve mobile video delivery.
Monday, July 2, 2012
Mobile video optimization 2012 - July update
For those who follow the video optimization market, it will not come as a surprise that my acclaimed report already needed an update after its release in March. The market has been abuzz with rumors and movement, following acquisitions, re-positioning and changes in market share:
- Bytemobile's acquisition by Citrix
- Ortiva Wireless' acquisition by Allot
- Openwave's acquisition by Marlin Equity Partners
- Mobile video optimization show 2012 in Brussels
- Flash Network now #2 in market share
The report describes the trends impacting network operators and the technologies involved in video optimization, and reviews the vendors and re-sellers in this space, with their differentiators and strategies.
You can find some reviews for the report and my services here and below:
“Patrick is an astute, engaging and articulate individual who has provided my company with valued data, opinion and reports on market status and dynamics in the area of OTT video. Patrick's insights have helped my company recently in developing group strategy and deployment options for video optimization and policy management. ” June 8, 2012
Top qualities: Great Results, Expert, High Integrity
Desmond O'Connor, Vice President of Data Design at Deutsche Telekom group
Thursday, June 7, 2012
Citrix takes a Byte out of the Video Optimization market
Bytemobile, the leader of the video optimization segment with over 50% market share, has been acquired by Citrix Systems. The companies had announced a strategic partnership in February 2012, in which Bytemobile's product offering was declared Citrix-ready, a move to enable Bytemobile video optimization to avail itself of enhanced scaling and the emerging cloud computing deployments in this market segment.
The terms of the acquisition were not disclosed.
After Allot's acquisition of Ortiva and Openwave's exit to Marlin Equity Partners, this is the third exit in that market segment in a short period of time, signalling a strong consolidation trend, as major players such as Cisco, Alcatel Lucent, Huawei, Ericsson, F5, Tellabs and others have started positioning themselves through their own offering, reselling or OEM agreements. Notably, this is possibly the first exit that is not an asset or technology sale.
One week before the video optimization forum, which I will be chairing in Brussels, this announcement promises much activity for network vendors looking at sharpening their video delivery portfolio. More to come on this shortly.
Wednesday, May 16, 2012
Sprint kills two birds
There is little doubt in my mind that someone woke up at Sprint one morning, looked at their current position and strategy, and thought:
- Launched iPhone, check,
- Introduced all-you-can-eat unlimited data plan, check,
- Launched 4G, check,
- ...wow that feels pretty good...
That is until someone must have asked "Who are our suppliers of mobile internet technology who we will be relying on to grow drastically our capacity and services while reducing our costs?".
The answer was probably, "the same vendors whom we have relied on for 2G and 3G, Openwave and Ortiva Wireless"... Well, the market had changed and as the execs looked at the viability of their current suppliers, they probably accelerated their exit by selecting a new vendor. Sprint has been rumored to have selected Bytemobile last month, after a short evaluation.
As you have seen, Ortiva got scooped up by Allot, a good operation for the vendor who has been wanting to expand their offering for the last eighteen months. The company was looking for good technology, at a low price, and that is exactly what they got.
Ortiva Wireless was one of the first pure-play video optimization vendors, focusing on transrating and dynamic bit rate adaptation. This narrow field allowed it to focus and execute well technically on a few deployments, but it lacked the breadth to challenge vendors with a more complete offering. The company never reached the critical mass to grow organically fast enough, and when the news hit last month that Sprint, its largest customer, was looking at alternative vendors for 4G, the investors, who had put in over $40m in equity and convertible debt, decided to look for an alternative growth strategy. Allot had been in the market for a video optimization vendor for a while, and the deal was concluded in a few weeks, for less than $16m.
The following week, Sandvine announced a joint video optimization deployment with Mobixell at nTelos. Bytemobile had already started communicating (here) about policy-based optimization at Mobile World Congress, with Openet.
As for Openwave, if you have followed the saga (here), you will not have been surprised to learn that, after a few weeks of due diligence with a couple of possible suitors, the company decided to continue licensing its patent portfolio under the name "Unwired Planet" while divesting its product divisions, split between Openwave Messaging and Openwave Mobility, to Marlin Equity Partners for $55m. It is too bad that the strategic relationship with Juniper did not develop into an acquisition, but it is hardly surprising, considering Openwave's market share and technical results in video optimization.
Meanwhile, as Comviva, NSN, OnMobile, and Huawei enter the segment with their in-house and OEM'd technology, Alcatel Lucent, Amdocs, Cisco and others have selected partners for VAR and OEM and are actively participating in vendors' evaluations.
These subjects and many more at the Mobile Video Optimization forum in Brussels June 12-13th. I am the show's official blogger and will chair day 1. I am looking forward to seeing you there.
Tuesday, May 1, 2012
Allot acquires Ortiva Wireless
By now, you probably all know whom I was referring to in my last post when I mentioned rumors of video optimization vendors getting closer to policy vendors. Allot announced this morning the acquisition of Ortiva Wireless for an undisclosed amount.
This is the 4th consolidation in this space in 24 months, after Ripcode was acquired by RGBNetworks, Dilithium's assets were sold to OnMobile in 2010 and Openwave's products division was acquired by Marlin Equity Partners earlier this year. Additionally, in related spaces, Avaya acquired Radvision and RealNetworks licensed its codecs to Intel in 2012.
I had first heard that Ortiva was in advanced discussions with Allot on March 31st. At that point, Ortiva, having allegedly lost future business with Sprint to Bytemobile, was in a dire situation as far as future revenue prospects were concerned. Furthermore, one of its main investors, Intel, does not appear on the last two financing bridges filed with the SEC. Allot, which had been rumored to have looked at many vendors in the space over the last 18 months, was the number one contender for a possible acquisition. Neither company wanted to offer comments at that stage, even when, last week, the rumor became public in Israel and was discussed on Azi Ronen's blog here.
Beyond the purely opportunistic approach of this acquisition, it makes a lot of sense for Allot to integrate video optimization functions into its portfolio. Bytemobile has announced strong ties with Openet, and last week, at the Policy Control and Real-Time Charging conference 2012, the core of many discussions revolved around how to monetize the tide of OTT video traffic.
I was appalled to hear that, when asked about the best way to price video, a panel composed of Reliance India, Vodafone Spain and Telefonica Czech was mostly concerned about congestion and looking at pricing based on time of day. This is a defensive, cost-containment strategy that is sure to backfire. Many vendors who have been selling cost reduction as the main rationale for video optimization have backpedaled in the last few months. As it happens, many operators found out that aggressively managing the size of the feeds to reduce costs in peak periods is not working. They see that a 20 to 30% reduction in the size of individual feeds does not mean less cost, but 20 to 30% more users accessing the same capacity at the same time. This leads in many cases to no additional revenue, since they have not found a way to monetize OTT traffic, and no cost reduction, since the network is still not able to meet the demand.
Time-of-day pricing is, of course, one of many possibilities, but what strikes me is that the industry has not yet agreed on the best way to measure video. Capacity (megabytes), definition (HD or standard), duration, recency, rights value or speed (megabits per second) are some of the metrics that can be used for video charging, but in the absence of a single metric accepted throughout the industry, many operators are hitting a wall. How is the industry supposed to monetize traffic that it is not able to measure properly? How can prices be shared and accepted by all the actors of the value chain if they measure the value of a video differently?
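A worked example of the problem: the same 10-minute clip "measures" very differently depending on the metric picked (the encoding rate is an assumption):

```python
duration_s = 10 * 60
encoding_mbps = 2.5                       # assumed 720p encoding rate
size_mb = encoding_mbps * duration_s / 8  # megabits -> megabytes
print(f"size: {size_mb:.0f} MB")                 # what byte accounting sees
print(f"speed: {encoding_mbps} Mbps sustained")  # what the RAN planner sees
print(f"duration: {duration_s // 60} min")       # what the rights holder prices
```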
Costs for content owners and aggregators are measured in rights, geographies, storage, version control... Costs for CDNs are measured in geographies, point of presence, capacity... Costs for mobile carriers are measured in capacity, speed, duration, time of day, geography...
This is a conundrum the industry will need to solve. If mobile network operators want to "monetize" OTT video traffic, they first need to agree on measures that can be used across mobile networks horizontally, and vertically with the other players of the value chain. Only then can an intelligent discussion on value and price take place. In the meantime, OTT vendors will continue selling (and in most cases giving away) video content on mobile networks, increasing costs with no means for a viable business model.
Wednesday, April 11, 2012
Policy driven optimization
The video optimization market is still young, but with over 80 mobile networks deployed globally, I am officially transitioning it from emerging to growth phase in the technology life cycle matrix.
Mobile World Congress brought much news in that segment, from new entrants to network announcements, technology launches and new partnerships. I think one of the most interesting trends is in policy and charging management for video.
Operators understand that charging models based on pure data consumption are doomed to be hard for users to understand and to be potentially either extremely inefficient or expensive. In a world where a new iPad can consume a subscriber's data plan in a matter of hours, while the same subscriber could watch 4 to 8 times as much video on a different device, the one-size-fits-all data plan is a dangerous proposition.
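Some back-of-envelope arithmetic behind that claim, with assumed per-device streaming rates:

```python
plan_gb = 2.0   # assumed monthly data allowance
for device, mbps in [("new iPad, 1080p stream", 4.0),
                     ("smartphone, 360p stream", 0.6)]:
    hours = plan_gb * 8 * 1024 / mbps / 3600   # GB -> megabits -> hours
    print(f"{device}: ~{hours:.1f} h of video on a {plan_gb:.0f} GB plan")
# ~1.1 h on the tablet vs ~7.6 h on the phone: roughly the 4-8x spread above.
```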
While the tool set to address the issue is essentially in place, with intelligent GGSNs, EPCs, DPIs, PCRFs and video delivery and optimization engines, this collection of devices has mostly been managing its portion of the traffic in a very disorganized fashion: access control at the radio and transport layers segregated from protocol and application, accounting separated from authorization and charging...
Policy control is the technology designed to unify them and since this market's inception, has been doing a good job of coordinating access control, accounting, charging, rating and permissions management for voice and data.
What about video?
The diameter Gx interface is extensible; it provides the semantics to convey traffic observations and decisions between one or several policy decision points and policy enforcement points. The standard allows for complex iterative challenges between end points to ascertain a session's user, his permissions and his balance as he uses cellular services.
Video was not a dominant part of the traffic when the policy frameworks were put in place, and not surprisingly, the first generation PCRFs and video optimization deployments were completely independent. Rules had to be provisioned and maintained in separate systems, because the PCRF was not video aware and the video optimization platforms were not policy aware.
This led to many issues, ranging from poor experience (DPI instructed to throttle traffic below the encoding rate of a video) and bill shock (ill-informed users blowing past their data allowance) to revenue leakage (poorly designed charging models unable to segregate the different types of HTTP traffic).
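As a sketch of the missing coordination, a video-aware enforcement point would at least keep a flow's throttle above the video's encoding rate; the names and the 20% headroom are my assumptions:

```python
from typing import Optional

def safe_throttle_kbps(policy_cap_kbps: int,
                       video_kbps: Optional[int]) -> int:
    """Never throttle a video flow below its encoding rate, or it stalls."""
    if video_kbps is None:
        return policy_cap_kbps        # not a video flow: apply cap as-is
    floor = int(video_kbps * 1.2)     # headroom so the player buffer fills
    return max(policy_cap_kbps, floor)
```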
The next-generation networks see a much tighter integration between policy decision and policy enforcement for the delivery of video in mobile networks. Many vendors in both segments collaborate and have moved past pure interoperability testing to deployments in commercial networks. Unfortunately, we have not seen many proof points of these integrations yet. Mostly, this is because the area is still emerging. Operators are still trying to find the right recipe for video charging. Standards do not offer guidance for specific video-related policies. Vendors have to rely on two-way (proprietary?) implementations.
Lately, we have seen the leaders in policy management and video optimization collaborate much more closely to offer solutions in this space: in some cases as the result of being deployed in the same networks and being "forced" to integrate gracefully, in many others because the market is entering a new stage of maturation. As you well know, I have been advocating a closer collaboration between DPI, policy management and video optimization for a while (here, here and here for instance). I think these are signs of market maturation that will accelerate concentration in that space. There are more and more rumors of video optimization vendors getting closer to mature policy vendors. It is a logical conclusion for operators seeking a better-integrated traffic management and charging management ecosystem centered around video going forward. I am looking forward to discussing these topics and more at Policy Control 2012 in Amsterdam, April 24-25.
Tuesday, March 20, 2012
Mobile video QOE part II: Objective measurement
Objective measurement of video is performed using mathematical models and algorithms that measure the introduction of noise and the structural similarity of video objects. Several mathematical models, such as PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity), are traditionally used for these calculations. The complexity resides in the fact that a mathematical difference from one pixel to another, or from one frame to another, does not necessarily translate equally in the human eye.
PSNR is a measure with medium to low accuracy, but it is computationally cheap: it represents possibly up to 10% of the CPU effort necessary to perform a transcoding operation. This means that although it provides a result that is not fully accurate, the model can be used to compute scores as the file is being optimized. A vendor can use PSNR as a basis to provide a Mean Opinion Score (MOS) on the quality of a video file.
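As a minimal sketch of the idea, PSNR can be computed from the mean squared error between two same-sized frames (grayscale NumPy arrays here); any vendor's mapping from PSNR to MOS is proprietary and not shown:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray,
         max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio between two same-sized frames, in dB."""
    mse = np.mean((reference.astype(np.float64) -
                   distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10 * np.log10(max_value ** 2 / mse)

# Toy usage: a random frame and a noisy copy of it.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(720, 1280), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(frame, noisy):.1f} dB")   # roughly 34 dB for sigma=5 noise
```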
Video quality of experience measurement can be performed with full reference (FR), reduced reference (RR) or no reference (NR).
Full Reference
Full reference video measurement means that every pixel of a distorted video is compared to the original video. It implies that both the original and the optimized video have the same number of frames, are encoded in the same format, with the same aspect ratio, etc. It is utterly impractical in most cases and requires enormous CPU capacity, in many cases more than what is necessary for the actual transcoding / optimization.
Here is an example of a full reference video quality measurement method under evaluation and submitted to the ITU-T.
As a full reference approach, the model compares the input or high-quality reference video and the associated degraded video sequence under test. Score estimation is based on the following steps:
1) The video sequences are preprocessed. In particular, noise is removed by filtering the frames and the frames are subsampled.
2) A temporal frame alignment between reference and processed video sequence is performed.
3) A spatial frame alignment between the processed video frame and the corresponding reference video frame is performed.
4) Local spatial quality features are computed: a local similarity and a local difference measure, inspired by visual perception.
5) An analysis of the distribution of the local similarity and difference features is performed.
6) A global spatial degradation is measured using a blockiness feature.
7) A global temporal degradation is measured using a jerkiness feature. The jerkiness measure is computed by evaluating local and global motion intensity and frame display time.
8) The quality score is estimated based on a non-linear aggregation of the above features.
9) To avoid misprediction in case of relatively large spatial misalignment between reference and processed video sequence, the above steps are computed for three different horizontal and vertical spatial alignments of the video sequence, and the maximum predicted score among all spatial positions is the final estimated quality score.
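A minimal sketch of step 9, with a toy stand-in for steps 1 to 8 (negative MSE after shifting; a real model would aggregate the similarity, blockiness and jerkiness features described above):

```python
import numpy as np
from itertools import product

def score_at_offset(reference: np.ndarray, processed: np.ndarray,
                    dx: int, dy: int) -> float:
    """Toy stand-in for steps 1-8: negative MSE after shifting the processed
    video by (dx, dy). A real model aggregates the perceptual features."""
    shifted = np.roll(processed, shift=(dy, dx), axis=(0, 1))
    return -float(np.mean((reference.astype(float) -
                           shifted.astype(float)) ** 2))

def final_quality_score(reference: np.ndarray, processed: np.ndarray,
                        offsets=(-1, 0, 1)) -> float:
    """Step 9: maximum predicted score over candidate horizontal and
    vertical spatial alignments."""
    return max(score_at_offset(reference, processed, dx, dy)
               for dx, dy in product(offsets, repeat=2))
```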
Reduced reference
Reduced reference video measurement performs the same evaluation as the full reference model, but only on a subset of the media. It is not widely used, as frames need to be synchronized and recognized before evaluation.
No reference
No reference video measurement is the most popular method in video optimization and is usually used when the encoding method is known. The method relies on tracking artefacts in the video, such as blockiness, jerkiness, blurring, ringing, etc., to derive a score.
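As an illustration of one such artefact metric, a crude blockiness score can compare gradient energy at the 8x8 block boundaries used by many codecs against gradient energy elsewhere. This is a toy sketch, not any vendor's metric:

```python
import numpy as np

def blockiness(frame: np.ndarray, block: int = 8) -> float:
    """Toy no-reference blockiness score: ratio of horizontal-gradient energy
    at 8x8 block boundaries to the energy at all other column transitions.
    Values well above 1 suggest visible blocking artefacts."""
    g = np.abs(np.diff(frame.astype(np.float64), axis=1))  # horizontal gradients
    cols = np.arange(g.shape[1])
    at_boundary = (cols % block) == (block - 1)            # transitions on a block edge
    return g[:, at_boundary].mean() / g[:, ~at_boundary].mean()
```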
Most vendors will create a MOS score from a proprietary no reference video measurement derived from mathematical models. The good vendors constantly update the mathematical model with comparative subjective measurements to ensure that the objective MOS score sticks as closely as possible to the subjective testing. You can find out who is performing which type of measurement, and their method, in my report, here.
Thursday, March 15, 2012
Mobile video optimization 2012: executive summary
As I publish my first report (description here), have an exclusive glance at the summary below.
Executive Summary
Video is a global phenomenon in mobile networks. In only 3 years, it has exploded from a marginal position (less than 10%) to dominating mobile traffic in 2012 with over 50%.
Mobile networks, until now, have been designed and deployed predominantly for transactional data. Messaging, email and browsing are fairly lightweight in terms of payload and only require speeds compatible with UMTS. Video brings a new element to the equation. Users rarely complained if their text or email arrived late; in fact, they rarely noticed. Video provides immediate feedback. Consumers demand quality and increasingly equate the network's quality with the video quality.
With the wide implementation of HSPA(+) and the first LTE deployments, together with the availability of attractive new smartphones, tablets and ultrabooks, it has become clear that today's networks and price structures are ill-prepared for this new era.
Handset and device vendors have gained much power in the balance, and many consumers now choose a device before a provider.
In parallel, the suppliers of content and services are boldly pushing their consumer relationships to bypass traditional delivery media. These Over-The-Top (OTT) players extract more value from consumers than the access and network providers. This trend is accelerating and threatens the very fabric of the business model for the delivery of mobile services.
This is the backdrop of the state of mobile video optimization in 2012. Mobile network operators find themselves in a situation where their core network is composed of many complex elements (GGSN, EPC, browsing gateways, proxies, DPI, PCRF…) that are extremely specialized but have been designed with transactional data in mind. The price plans devised to make sure the network is fully utilized are backfiring, and many carriers are discontinuing all-you-can-eat data plans and subsidizing the adoption of limited, capped, metered models. Radio access is a scarce resource, with many operators battling their regulators to obtain more spectrum. The current model to purchase capacity, based on buying more base stations and densifying the network, is finding its limits. Costs for network build-up are even expected to exceed data revenues in the coming years.
On the technical front, many operators are hitting Shannon's limit, the theoretical ceiling for spectrum efficiency. Diminishing returns are the rule rather than the exception as the RAN becomes denser for the same available spectrum: noise and interference increase.
On the financial front, should an operator follow demand, it would have to double its mobile data capacity on a yearly basis. The projected revenue increase for data services shows only a CAGR of 20% through 2015. How can operators keep running their business profitably?
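A quick compounding comparison, using illustrative index values, makes the squeeze obvious:

```python
# Illustrative gap between capacity demand (doubling yearly) and data
# revenue (20% CAGR); both indexes start at 100 in year 0.

capacity, revenue = 100.0, 100.0
for year in range(1, 4):
    capacity *= 2.0     # demand-driven capacity doubles every year
    revenue *= 1.20     # data revenue grows at a 20% CAGR
    print(f"year {year}: capacity index {capacity:.0f}, revenue index {revenue:.0f}")

# year 3: capacity 800 vs revenue 173 -- the cost per delivered byte must
# fall drastically for the business to stay profitable.
```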
Operationally, doubling capacity every year seems impossible for most networks, which plan roll-outs on 3- to 5-year horizons.
Video optimization has emerged as one of the technologies deployed to solve some of the issues highlighted above. Deployed in over 80 networks globally, it is a market segment that generated $102m in 2011 and is projected to generate over $260m in 2012. While it is not the unique solution to this issue, {Core Analysis} believes that most network operators will have to deploy video optimization as one weapon in the arsenal to combat the video invasion of their networks. 2009 to 2011 saw the first commercial video optimization deployments, mostly as a defensive move to shore up embattled networks. 2012 sees video optimization as a means to complement and implement monetization strategies based on usage metering and control, quality of experience measurement and video class-of-service delivery.
Tuesday, March 6, 2012
GSMAOneAPI: One API to rule them all?
The API is based on XML/SOAP; its version 2, available since June 2011, includes SMS, MMS, Location and Payments, as well as Voice Call Control, Data Connection Profile and Device Capability.
A live pilot implementation is ongoing in Canada with Bell, Rogers and Telus. It provides the capability for a content provider to enable cross-network features such as messaging, call and data control. It is powered by Aepona.
The interesting fact about this API is that, for the first time, it exposes some data control indications inherent to the core and RAN networks to potential external content providers or aggregators.
I went through an interesting demonstration on the GSMA OneAPI stand at Mobile World Congress 2012 by a small company called StreamOne, out of the Netherlands.
The company uses the API to retrieve from the operator the bearer the device is currently connected on. Additional extensions to the API currently under consideration by the GSMA include download speed, upload speed and latency. These data points, when available to content providers and aggregators, could go a long way towards making techniques such as Adaptive Bit Rate (ABR) more mobile friendly, and potentially make way for a real bandwidth negotiation between network and provider. It might be the beginning of a practical approach to two-sided business models to monetize quality of experience and service of OTT data traffic. As seen here, ABR lacks the capabilities to provide both operators and content providers with the control they need.
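To illustrate the idea, here is a hypothetical sketch of how a content provider might consume such bearer data to pick an encoding profile; the endpoint path, field names and thresholds are my assumptions for illustration, not the actual OneAPI contract:

```python
# Hypothetical sketch: pick a video profile from network-reported bearer data.
# The endpoint URL and JSON fields are illustrative assumptions, not the
# actual GSMA OneAPI specification.
import json
import urllib.request

ENDPOINT = "https://api.example-operator.com/oneapi/terminalstatus"  # hypothetical

def network_info(msisdn: str) -> dict:
    """Query the (hypothetical) operator endpoint for the device's bearer data."""
    with urllib.request.urlopen(f"{ENDPOINT}?address={msisdn}") as resp:
        return json.load(resp)   # e.g. {"bearer": "LTE", "downloadSpeedKbps": 4000}

def choose_profile(info: dict) -> str:
    """Map the reported downlink speed to an encoding profile (thresholds assumed)."""
    kbps = info.get("downloadSpeedKbps", 0)
    if kbps >= 3000:
        return "720p"
    if kbps >= 1200:
        return "480p"
    return "240p"
```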
Of course, when looking at the standardization track, these efforts take years to translate into commercial deployments, but the seed is there, and if network operators deploy it and content providers use it, we could see a practical implementation in the next 3 to 5 years. Want to know more about practical uses and ABR alternatives? Check here.
Monday, March 5, 2012
NSN buoyant on its liquid net
I was with Rajeev Suri, CEO of NSN, together with about 150 of my esteemed colleagues from the press and analyst community, on February 26 at Barcelona's World Trade Center, drinking NSN's Kool-Aid for 2012. As it turns out, the Liquid Net is not hard to swallow.
The first trend highlighted is about big data, big mobile data that is. NSN's prediction is that by 2020, consumers will use 1GB per day on mobile networks.
When confronted with these predictions, network operators have highlighted 5 challenges:
- Improve network performance (32%)
- Address decline in revenue (21%)
- Monetize the mobile internet (21%)
- Network evolution (20%)
- Win in new competitive environment (20%)
Don't worry if the total is more than 100%: either it was a multiple-choice questionnaire, or NSN's view is that operators are very preoccupied.
Conveniently, these challenges are met with 5 strategies (hopes) that NSN can help with:
- Move to LTE
- Intelligent networks and capacity
- Tiered pricing
- Individual experience
- Operational efficiency
And this is what has fed the company over the last year, with sales doubling to 14B euros in 2011 and an actual operating profit of 225m euros. The CEO agrees that NSN is not back yet, and more divestments and redundancies are planned (8,500 people out of 17,000 will leave) for the company to reach its long-term target of a 10% operating profit. NSN expects its LTE market share to double in 2012.
Liquid Net
Liquid Net is the moniker chosen by NSN to answer the general anxiety surrounding data growth and revenue shrinkage. It promises 1,000 times more capacity by 2020 (yes, 1,000), and the very complex equation to explain the gain is as follows: 10 times more cell sites (figures...), 10 times more spectrum and 10 times more efficiency.
The example chosen to illustrate Liquid Net was, I think, telling. NSN has deployed its network at an operator in the UK, where it famously replaced Ericsson last summer. It has since been able to detect devices and capabilities and adapt video resolutions, with Netflix for instance, resulting in 50% less congestion in some network conditions. That is hard to believe. Netflix being encrypted, I was scratching my head trying to understand how a lossless technique could reach these numbers.
The overall savings claimed for implementing Liquid Net are a 65% capacity increase, a 30% coverage gain and a 35% reduction in TCO.
Since video delivery in mobile networks is a bit of a fixation of mine, I decided to dig deeper into these extraordinary claims. I have to confess my skepticism at the outset: I am familiar with NSN, having dealt with the company as a vendor for the last 15 years, and am more familiar with its glacial pace of innovation in core networks.
I have to say, having gone through a private briefing, presentation and demonstration, I was surprised by the result. I am starting to change my perspective on NSN, and so should you. To find out why and how, you will need to read the write-up in my report.