
Thursday, May 2, 2024

How to manage mobile video with Open RAN

Ever since the launch of 4G, video has been a thorny issue for network operators to manage. Most of them rolled out unlimited or generous data plans without understanding how video would affect their networks and economics. Most videos streamed to your phone use a technology called Adaptive Bit Rate (ABR), which is supposed to adapt the video’s definition (think SD, HD, 4K…) to the network conditions and your phone’s capabilities. While this implementation was supposed to provide more control over the way videos were streamed on the networks, in many cases it had the opposite effect.

 

The multiplication of streaming video services has led to ferocious competition on the commercial and technological front. While streaming services visibly compete on their pricing and content attractiveness, a more insidious technological battle has also taken place. The best way to describe it is to compare video to a gas. Video will take up as much capacity in the network as is available.

When you start a streaming app on your phone, it will assess the available bandwidth and try to deliver the highest definition video available. Smartphone vendors and streaming providers try to provide the best experience to their users, which in most cases means getting the highest bitrate available. When several users in the same cell try to stream video, they are all competing for the available bandwidth, which leads in many cases to a suboptimal experience, as some users monopolize most of the capacity while others are left with crumbs.

 

In recent years, technologies have emerged to mitigate this issue. Network slicing, for instance, when fully implemented could see dedicated slices for video streaming, which would theoretically guarantee that video streaming does not adversely impact other traffic (video conferencing, web browsing, etc…). However, it will not resolve the competition between streaming services in the same cell.

 

Open RAN offers another tool for efficiently resolving these issues. The RIC (RAN Intelligent Controller) provides, for the first time, the capability to visualize a cell’s congestion in near real time and to apply optimization techniques with a great level of granularity. Before Open RAN, the means of visualizing network congestion in a multi-vendor environment were limited, and the means to alleviate it were broad and coarse. The RIC makes it possible to create policies at the cell level, on a per-connection basis. Algorithms allow traffic type inference, and policies can be enacted to adapt the allocated bandwidth based on a variety of parameters such as signal strength, traffic type, congestion level, power consumption targets…
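To make the idea concrete, here is a minimal sketch in Python of the kind of per-connection logic a RIC application could apply. The names (Connection, decide_cap_mbps) and all thresholds are made up for illustration; this shows the principle, not a real O-RAN interface.

from dataclasses import dataclass

@dataclass
class Connection:
    traffic_type: str    # e.g. "video", "voice", "web" -- inferred by a classifier
    signal_dbm: float    # reported signal strength
    cell_load: float     # 0.0 (idle) to 1.0 (fully congested), near real time

def decide_cap_mbps(conn: Connection) -> float:
    """Pick a per-connection downlink cap from congestion level, traffic type and radio conditions."""
    if conn.traffic_type in ("voice", "emergency"):
        return 0.5                    # always protected; needs very little capacity
    if conn.cell_load < 0.7:
        return 50.0                   # uncongested cell: no meaningful cap
    base = 3.0 if conn.traffic_type == "video" else 8.0   # throttle video harder under congestion
    if conn.signal_dbm < -110:
        base = min(base, 2.0)         # a weak radio link cannot sustain more anyway
    return base

print(decide_cap_mbps(Connection("video", -98.0, 0.9)))   # -> 3.0 Mbps in a congested cell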

 

For instance, an operator or a private network for a stadium or entertainment venue could easily program the network to block uploaded video during a show, to protect broadcasting or intellectual property rights. This can be achieved simply by limiting video uplink traffic while preserving voice, security and emergency traffic.

 

Another example would see a network actively dedicating deterministic capacity per connection during rush hour, or when a congestion threshold is reached in a downtown core, to guarantee that all users have access to video services with equally shared bandwidth and quality.

 

A last example could see first responder and emergency services get guaranteed high-quality access to video calls and broadcasts.

 

When properly integrated into a policy and service management framework for traffic slicing, Open RAN can be an efficient tool for adding fine-grained traffic optimization rules, allowing a fairer apportioning of resources for all users while preserving overall quality of experience.

 

Tuesday, October 3, 2023

Should regulators forfeit spectrum auctions if they can't resolve Net Neutrality / Fair Share?

I have been writing about Net Neutrality and Fair Share broadband usage for nearly 10 years. Both sides of the argument have merit, and it is difficult to find a balanced view represented in the media these days. Absolutists would lead you to believe that internet usage should be unregulated, with everyone able to stream, download and post anything anywhere, without respect for intellectual property or fair usage; on the other side of the fence, service provider dogmatists would like to control, apportion, prioritize and charge based on their interests.

Of course, the reality is a little more nuanced. A better understanding of the nature and evolution of traffic, as well as of the cost structure of networks, helps to appreciate the respective parties' stances and offers a better view of what could be done to reduce the chasm.

  1. From a cost structure perspective first, our networks grow and accommodate demand differently depending on whether we are looking at fixed line / cable / fibre broadband or mobile. 
    1. In the first case, capacity growth is a function of technology and civil works. 
      1. On the technology front, the evolution from dial-up / PSTN to copper and then fiber dramatically increases the network's capacity and has followed ~20-year cycles. The investments are enormous and require the deployment and management of central offices and their evolution to edge compute data centers. These investments happen in waves within a relatively short time frame (~5 years). Once operated, the return on investment is a function of the number of users and the utilisation rate of the asset, which in this case means filling the network with traffic.
      2. On the civil works front, throughout the technology evolution, continuous work is ongoing to lay transport fiber along new housing developments, while replacing antiquated and aging copper or cable connectivity. This is a continuous burn and its run rate is a function of the operator's financial capacity.
    2. In mobile networks, you can find similar categories but with a much different balance and impact on ROI.
      1. From a technology standpoint, the evolution from 1G to 5G has taken roughly 10 years per cycle. A large part of the investment for each generation is a spectrum license leased from the regulator / government. In addition, most network elements, from the access to the core and OSS / BSS, need to be changed. The transport part relies in large part on the fixed network above. Until 5G, most of these elements were constituted of proprietary servers and software, which meant that a generational change induced a complete forklift upgrade of the infrastructure. With 5G, the separation of software and hardware, the extensive use of COTS hardware and the cloud-based separation of traffic and control planes should mean that the next generational upgrade will be less expensive, with only the software and part of the hardware necessitating a complete refresh.
      2. The civil work for mobile networks is comparable to that of fixed networks for new coverage, but follows the same cycles as the technology timeframe with respect to the upgrades and changes necessary to the radio access. Unlike the fixed network, though, there is an obligation of backwards compatibility, with many networks still running 2G, 3G and 4G while deploying 5G. The real estate being essentially antennas and cell sites, this becomes a very competitive environment with limited capacity for growth in space, pushing service providers to share assets (antennas, spectrum, radios...) and to deploy, whenever possible, multi-technology radios.
The conclusion here is that fixed networks have long investment cycles and ROI and low margins, relying on the number of connections and traffic growth, while mobile networks have shorter investment cycles, with bursty margin growth and contraction with each new generation.

What does this have to do with Net Neutrality / Fair Share? I am coming to it, but first we need to examine the evolution of traffic and prices to understand where the issue resides.

Now, in the past, we had to pay for every single minute, text or kB received or sent. Network operators were making money off traffic growth and were pushing users and content providers to fill their networks. Video somewhat changed that. A user watching a 30-second video doesn't really care / perceive whether the video is at 720p, 1080p or 4K, 30 or 60 fps. It is essentially the same experience. That same video, though, can vary in size by 20x depending on its resolution. To compound the issue, operators foolishly transitioned to all-you-can-eat data plans with 4G to acquire new consumers, a self-inflicted wound that has essentially killed their 5G business case.
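A back-of-the-envelope calculation makes the point. The bitrates below are assumed, order-of-magnitude values, not measurements:

# Size of the same 30-second clip at assumed average bitrates per definition.
bitrates_mbps = {"480p": 1.0, "720p": 2.5, "1080p": 5.0, "4K": 20.0}   # illustrative values
seconds = 30
for definition, mbps in bitrates_mbps.items():
    size_mb = mbps * seconds / 8      # megabits -> megabytes
    print(f"{definition}: ~{size_mb:.1f} MB")
# ~3.8 MB at 480p vs ~75 MB at 4K: roughly a 20x spread for the "same" viewing experience.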

I have written at length about the erroneous assumptions underlying some of the discourse of net neutrality advocates. 

In order to understand net neutrality and traffic management, one has to understand the different perspectives involved.
  • Network operators compete against each other on price, coverage and, more importantly, network quality. In many cases, they have identified that improving or maintaining quality of experience is the single most important success factor for acquiring and retaining customers. We have seen it time and again with voice services (call drops, voice quality…), messaging (texting capacity, reliability…) and data services (video start, stalls, page loading time…). These KPIs are at the heart of the operator’s business. As a result, operators tend to try to improve or control user experience by deploying an array of traffic management functions.
  • Content providers assume that the highest quality of content (8K UHD for video, for instance) equals maximum experience for the subscriber and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. 
The flaw here is the assumption that the optimum is the product of many maxima self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through these networks.

This behavior leads to a network where resources can be in contention and all end-points vie for priority and maximum resource allocation. From this perspective one can understand that there is no such thing as "net neutrality" at least not in wireless networks. 

When network resources are over-subscribed, decisions are taken as to who gets more capacity, priority, speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When resources are in contention, traffic is managed based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.
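For a sense of what "fair and equal access" could mean in practice, here is a minimal sketch of a max-min fair-share allocation, the textbook way of apportioning a contended resource; it illustrates the principle only, not what any particular scheduler or vendor actually implements.

def max_min_fair(capacity: float, demands: list) -> list:
    """Give every flow an equal share; flows that need less return the surplus to the others."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    cap_left = capacity
    while remaining:
        share = cap_left / len(remaining)
        satisfied = [i for i in remaining if demands[i] <= share]
        if not satisfied:                      # everyone left wants more than an equal share
            for i in remaining:
                alloc[i] = share
            break
        for i in satisfied:
            alloc[i] = demands[i]
            cap_left -= demands[i]
            remaining.remove(i)
    return alloc

# A 10 Mbps cell with two greedy video flows and one small web flow:
print(max_min_fair(10.0, [8.0, 8.0, 1.0]))     # -> [4.5, 4.5, 1.0]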

As we contemplate 6G, with hints of metaverse, augmented / mixed reality and hyper-connectivity, the cost structure of network infrastructure hasn't yet been sufficiently decoupled from traffic growth; as we have seen, video is elastic, and XR will be a heavy burden on the networks. Network operators have essentially failed so far to offer attractive digital services that would monetize their network investments. Video and digital service providers are already paying for their on-premise and cloud infrastructure as well as transport; there is little chance they would finance telco operators' capacity growth.

Where does this leave us? It might be time for regulators / governments either to take an active and balanced role in Net Neutrality and Fair Share, to ensure that both sides can find a sustainable business model, or to forfeit spectrum auctions for the next generations.

Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speed and ultra low latency to enable Internet of Things, autonomous vehicles, augmented, mixed and virtual reality and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses latest virtualization, small cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network. 


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are and how this potential multi-billion-dollar technology can change how OTTs, operators, enterprises and machines can enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator perspective.

Table of contents, executive summary can be downloaded here.

Friday, March 18, 2016

For or against Adaptive Bit Rate? part V: centralized control

I have seen much speculation and many claims over the last few weeks around T-Mobile's Binge On service launch, and these have accelerated with yesterday's announcement of Google Play and YouTube joining the service. As usual, many are getting on their net neutrality battle horse, using flawed assumptions and misconceptions to reject the initiative.

I have written at length about what ABR is and what its pros and cons are; you can find some extracts in the links at the end of this post. I'll try here to share my views and expose some facts to enable a more pragmatic approach.

I think we can safely assume that every actor in the mobile video delivery chain wants to enable the best experience for users, whenever possible.
As I have written in the past, in the current state of affairs, adaptive bit rate is oftentimes corrupted in order to seize as much network bandwidth as possible, which results in devices and service providers aggressively competing for bits and bytes.
Content providers assume that the highest quality of content (1080p HD video, for instance) equals maximum experience for the subscriber and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

An OTT cannot know why a user’s session downstream speed is degrading; it can just report it. Knowing why is important because it enables better decisions about the corrective actions to undertake to preserve the user’s experience. For instance, a reduction of bandwidth for a particular user can be the result of a handover (4G to 3G, or between cells with different capacity), of congestion in a given cell, of the distance between the phone and the antenna, of the user entering a building or an elevator, or of the user reaching her data cap and being throttled, etc. Reasons can be multiple, and for each of them a corrective action can have a positive or a negative effect on the user’s experience. For instance, in a video streaming scenario, you can have a group of people in a given cell streaming Netflix and others streaming YouTube. Naturally, the video is streamed in progressive download, adaptive bit rate format, which means that each stream will try to climb to the highest available download bit rate to deliver the highest video definition possible. All sessions will theoretically increase the delivered definition up to the highest available or the highest delivery bit rate available, whichever comes first. In a network with ample capacity, everyone ramps up to 1080p and everyone has a great user experience.

More often than not, though, that particular cell cannot accommodate everyone’s stream at the highest definition at the same time. Adaptive bit rate is supposed to help there again, by stepping down the definition until it fits within the available delivery bit rate. Unfortunately, it can’t work like that when we are looking at multiple sessions from multiple OTTs. Specifically, as soon as one player starts reducing its definition to meet a lower delivery bit rate, that freed-up bandwidth is grabbed by the other players, which can now look at increasing their definition even further. There is no incentive for a content provider to reduce bandwidth quickly to follow network conditions, because it can be starved by its competition in the same cell.
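A toy simulation illustrates the dynamic. The bitrate ladder, the cell capacity and the update order are all assumptions made for the sketch; real ABR clients are more sophisticated, but the incentive problem is the same.

RUNGS_MBPS = [1.0, 2.5, 5.0, 8.0]     # illustrative ABR ladder, lowest to highest definition
CELL_CAPACITY = 12.0                  # shared downlink capacity in the cell

def greedy_update(state, i):
    """Player i independently picks the highest rung that fits in what the others leave over."""
    headroom = CELL_CAPACITY - (sum(state) - state[i])
    fitting = [r for r in RUNGS_MBPS if r <= headroom]
    state[i] = max(fitting) if fitting else RUNGS_MBPS[0]

state = [8.0, 8.0, 8.0]               # three sessions all aiming for the top rung (24 > 12)
for step in range(6):
    greedy_update(state, step % 3)    # players adapt one at a time
    print(state)
# The session that backs off first ends up stuck at the lowest rung while the last one keeps
# the top rung: with uncoordinated greed, stepping down quickly means getting starved.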

The solution here is simple: the delivery of ABR video content has to be managed and coordinated between all providers. The only way and place to provide this coordination is in the mobile network, as close to the radio resource as possible. [...]

This and more in my upcoming Mobile Edge Computing report.


Part I: What is ABR?
Part II: For ABR
Part III: Why isn't ABR more successful?
Part IV: Alternatives

Tuesday, March 15, 2016

Mobile QoE White Paper




Extracted from the white paper "Mobile Networks QoE" commissioned by Accedian Networks. 

2016 is an interesting year in mobile networks. Maybe for the first time, we are seeing tangible signs of evolution from digital services to mobile-first. As was the case for the transition from traditional services to digital, this evolution causes disruptions and new behavior patterns across the ecosystem, from users to networks to service providers.
Take social networks, for example. 47% of Facebook users access the service exclusively through mobile and generate 78% of the company’s ad revenue. In video streaming services, YouTube sees 50% of its views on mobile devices, and 49% of Netflix’s 18-to-34-year-old demographic watches it on mobile.
This extraordinary change in behavior causes unabated traffic growth on mobile networks as well as changes in the traffic mix. Video becomes the dominant use that pervades every other aspect of the network. Indeed, all involved in the mobile value chain have identified video services as the most promising revenue opportunity for next generation networks. Video services are rapidly becoming the new gold rush.


“Video services are the new gold rush”
Video is essentially a very different animal from voice or even other data services. While voice, messaging and data traffic can be predicted fairly accurately as a function of the number and density of subscribers, time of day and busy hour patterns, video follows a less predictable growth. There is a wide disparity in consumption from one user to another, and this is not only due to viewing habits. It is also a function of the device's screen size and resolution, the network being used and the video services accessed. The same video, viewed on a social sharing site on a small screen, or in full HD or 4K on a large screen, can have a 10-20x difference in impact on the network, for essentially the same service.


Video requires specialized equipment to manage and guarantee its quality in the network; otherwise, when congestion occurs, there is a risk that it consumes resources, effectively denying voice, browsing, email and other services fair (and necessary) access to the network.
This unpredictable traffic growth results in exponential costs for networks to serve the demand.
As mobile becomes the preferred medium to consume digital content and services, Mobile Network Operators (MNOs), whose revenue was traditionally derived from selling “transport,” see their share squeezed as subscribers increasingly value content and have more and more options in accessing it. The double effect of the MNOs’ decreasing margins and increasing costs forces them to rethink their network architecture.
New services on the horizon, such as Voice and Video over LTE (VoLTE & ViLTE), augmented and virtual reality, wearables and IoT, automotive and M2M, will not be achievable technologically or economically with the current networks.

Any architecture shift must not simply increase capacity; it must also improve the user experience. It must give the MNO granular control over how services are created, delivered, monitored, and optimized. It must make best use of capacity in each situation, to put the network at the service of the subscriber. It must make QoE — the single biggest differentiator within their control — the foundation for network control, revenue growth and subscriber loyalty.
By offering exceptional user experience, MNOs can become the access provider of choice, part of their users' continuously connected lives as their trusted curator of apps, real-time communications, and video.


“How to build massively scalable networks while guaranteeing Quality of Experience?”

As a result, the mobile industry has embarked on a journey to design tomorrow’s networks, borrowing heavily from the changes that have revolutionized enterprise IT departments with SDN (Software Defined Networking) and innovating with 5G and NFV (Networks Functions Virtualization) for instance. The target is to emulate some of the essential attributes of innovative service providers such as Facebook, Google and Netflix who have had to innovate and solve some of the very same problems.


QoE is rapidly becoming the major battlefield upon which network operators and content providers will differentiate and win consumers’ trust. Quality of Experience requires a richly instrumented network, with feedback telemetry woven through its fabric to anticipate, detect and measure any potential failure.

Tuesday, February 2, 2016

How to Binge On?

So... you have been surprised, excited, curious about the T-Mobile US Binge On launch.
The innovative service is defining new cooperative models with so-called OTTs by blending existing and new media manipulation technologies.

You are maybe wondering whether it would work for you? Do you know what it would take for you to launch a similar service?
Here is a quick guide of what you might need if you are thinking along those lines.


The regulatory question

First, you probably wonder whether you can even launch such a service. Is it contravening any net neutrality rule? The answer might be hard to find. Most net neutrality provisions are vague, inaccurate or downright technologically impossible to enforce, so when launching a new service, the best one can have is an opinion.
MNOs have essentially two choices: either not innovating, launching endless minute variations of existing services, or launching innovative services. The latter strategy will always carry a measure of risk, but MNOs can't aspire to be disruptive without risk taking. In this case, the risk is fairly limited, provided that the service is voluntary, with easy opt-in / opt-out. There are always going to be challenges - even legal ones - to that operating assumption, but operators have to accept that as part of the cost of innovation. In other words, if you want to create new revenue streams, you have to grow some balls and take some risks; otherwise, just be a great network and abandon the ambition to sell services.


The service

For those not familiar with Binge On, here is a quick overview. Binge On allows any new or existing subscriber with a 3GB data plan or higher to stream videos for free from over 40 popular content providers including Netflix, Hulu, HBO and ESPN.
The videos are zero rated (do not count towards the subscriber's quota) and are limited to 480p definition.
The service is free.

The content

Obviously, in the case of Binge On, the more content providers with rich content sign on for the service, the richer and more attractive the offering. T-Mobile has been very smart to entice some of the most popular video services to sign on for Binge On. Netflix and HBO have a history of limited collaboration with a few network operators, but no MNO to date has been able to create such a rich list of video partnerships.
Experience proves that the key to successful video services is breadth, depth and originality of the content. In this case, T-Mobile has decided not to intervene in content selection, simply allowing some of the most popular video services to participate in the service.
Notably, Facebook, Twitter, Google, Apple and Amazon properties are missing, with YouTube claiming a technical incompatibility prevents it from participating.

The technology

What does the service entail technically? The first capability a network needs to enable such a service is to discriminate content from a participating video provider versus other services. In some cases, when traffic is not encrypted, it is just a matter of creating a rule in the DPI or web gateway engine to apply zero rating to specific content / origins. 

Picking Netflix traffic out of the rest of the video traffic is not necessarily simple, since many premium video service providers deliver their service over encrypted protocols, to avoid piracy or privacy issues. The result is that a level of integration is necessary for the network to unambiguously detect a video session from Netflix. 

In this case, unencrypted metadata in the headers can be used to identify the service provider and even the content. That is not all, though: conceivably, some services might not be exclusively video. If we imagine a service like Facebook being part of Binge On, the network now needs to separate browsing traffic from video. This can be achieved with traffic management platforms that usually deploy heuristics or algorithms to segregate traffic from the same source by looking at packet size, session duration, packet patterns, etc.
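As an illustration, here is a toy heuristic with entirely made-up thresholds, showing the kind of flow-level features (duration, average throughput, periodic segment fetches) such engines can look at without ever reading the payload:

def looks_like_video(duration_s: float, total_bytes: int, burst_count: int) -> bool:
    """Rough guess at whether an encrypted flow is ABR video, from flow statistics only."""
    if duration_s < 20:
        return False                          # video sessions tend to be long-lived
    avg_kbps = total_bytes * 8 / duration_s / 1000
    periodic_fetches = burst_count / duration_s > 0.1   # ABR clients fetch a segment every few seconds
    return 300 < avg_kbps < 20000 and periodic_fetches

print(looks_like_video(duration_s=300, total_bytes=40_000_000, burst_count=75))   # True
print(looks_like_video(duration_s=8, total_bytes=2_000_000, burst_count=3))       # False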

Now that you are able to discriminate the content from participating partners, you need to tie it to subscribers who have opted in or out of the service. This is usually performed in the PCRF / charging function or in the EPC where the new service is created. A set of rules is assembled to associate the list of content providers with a zero-rated class of service and to associate a subscriber class with these services. The subscriber class is a toggled setting in the subscriber profile that resides in the subscriber database. As a subscriber starts an HBO episode, the network detects that this service is part of Binge On, looks up whether that user is subscribed to the service or not, and applies the corresponding rate code. As a result, the amount of data consumed for the session is either accumulated and deducted from the subscriber's quota or not, depending on whether she is a Binge On user.
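Put together, the charging decision boils down to something like the sketch below; the service list, field names and numbers are invented for illustration, not T-Mobile's actual implementation.

ZERO_RATED_SERVICES = {"netflix", "hbo", "hulu", "espn"}     # participating providers (illustrative)

def charge_session(subscriber: dict, service: str, session_bytes: int) -> None:
    """Deduct from the quota only when the session is not zero-rated for this subscriber."""
    if subscriber["binge_on_opt_in"] and service in ZERO_RATED_SERVICES:
        return                                # zero-rated: nothing counted against the quota
    subscriber["quota_bytes"] -= session_bytes

sub = {"binge_on_opt_in": True, "quota_bytes": 3_000_000_000}   # 3 GB plan
charge_session(sub, "hbo", 500_000_000)       # HBO episode: zero-rated
charge_session(sub, "somesite", 50_000_000)   # regular browsing: counted
print(sub["quota_bytes"])                     # -> 2950000000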

We are almost done.

The big gamble taken by T-Mobile is that customers will trade unlimited quality for unlimited content. Essentially, the contract is that those who opt in to Binge On will be able to stream unlimited video from participating providers, on the condition that the video's definition is limited to 480p. In many cases, this is an acceptable quality for phones and tablets, as long as you do not hotspot the video to a laptop or a TV.
That limitation is the quid pro quo that T-Mobile is enforcing, allowing it to have cost and service quality predictability.

That capability requires more integration between the content provider and T-Mobile. 480p is an objective video display target that usually describes a 640 x 480 pixel picture size. Videos encoded at that definition will vary in size, depending on the codec used, the number of frames per second and other parameters.

Most premium video providers in Binge On deliver their content using adaptive bit rate, essentially offering a number of possible video streams ranging from low to high definition. In this case, T-Mobile and the content provider have to cap the format at 480p. This could be done by the content provider, of course, since it has all the formats. They could decide to send only the 480p and lower versions, but that would be counterproductive: the content provider does not know whether the subscriber has opted in to Binge On or not, and that information, which belongs to T-Mobile, cannot be freely shared.
As a result, content providers send the video in their usual definitions, leaving T-Mobile with the task of selecting the right format.

There are several ways to achieve that. The simplistic approach is just to limit the delivery bit rate so that the phone can never select more than 480p. This is a hazardous approach, because 480p encoding can result in delivery bit rate demands ranging from 700 kbps to 1.5 Mbps depending on the codec being used, too wide a range for T-Mobile to provide any guarantee. Set the limit too low and some providers will never achieve 480p. Set it too high and subscribers will see fluctuating quality, with even 720p or 1080p formats getting through.
The best way to achieve the desired result is to intercept the adaptive bit rate manifest delivered by the content provider at the establishment of the session and strip out all definitions above 480p. This guarantees that the video will never be delivered above 480p but can still fluctuate based on network congestion. This can be achieved either with a specialized video optimization platform or in some of the more advanced EPCs.
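The principle of that manifest manipulation can be shown on a toy HLS master playlist. The playlist content and function name below are invented for the sketch; a production implementation would live in a video optimization platform or EPC rather than a script.

import re

MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x480
480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p.m3u8
"""

def cap_at_480p(manifest: str) -> str:
    """Remove every variant whose vertical resolution exceeds 480, along with its URI line."""
    out, skip_next_uri = [], False
    for line in manifest.splitlines():
        if line.startswith("#EXT-X-STREAM-INF"):
            match = re.search(r"RESOLUTION=(\d+)x(\d+)", line)
            if match and int(match.group(2)) > 480:
                skip_next_uri = True          # drop this variant declaration
                continue
        elif skip_next_uri and not line.startswith("#"):
            skip_next_uri = False             # this was the URI of a dropped variant
            continue
        out.append(line)
    return "\n".join(out) + "\n"

print(cap_at_480p(MASTER))                    # only the 640x480 variant remains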

As we can see, the service is sophisticated and entails several steps. A network's capacity to deploy such a service is directly linked to its ability to link and instantiate services and network functions in an organic manner. Only the most innovative EPC, traffic detection and video management functions vendors can provide the flexibility and cost effectiveness to launch such a service.


Monday, December 21, 2015

Bytemobile: what's next?

Following the brutal announcement of Bytemobile's product line discontinuation by Citrix, things are starting to get a little clearer in terms of what the potential next steps could be for their customers.

Citrix was the market leader in terms of number of deployments and revenue in the video optimization market when it decided to kill this product offering due to an internal strategic realignment. The news left many customers confused as to what - if any - support they can expect from the company.

Citrix's first order of action over the last month has been to meet with every major account to reassure them that the transition will follow a plan. What transpires at this point is that a few features from the ByteMobile T-3100 product family will be migrated to NetScaler, probably towards the end of 2016. Citrix is still circling the wagons at this stage and seems to be evaluating the business case for the transition, which will condition the number of features migrated and the capacity to reach feature parity.

In many cases, network operators who have deployed versions of ByteMobile T-3100 have been put on notice to upgrade to the latest version, as older versions will see end of support notices going out next year.

Concurrently, and presumably, Citrix won't be able to confirm NetScaler's detailed roadmap and transition plan until it has a better idea of the number and type of customers that will elect to migrate.

In the meantime, ByteMobile's historical competitors are drawing battle plans to take advantage of this opportunity. A forklift upgrade is never an easy task to negotiate and, no doubt, there will be much pencil sharpening in the new year in core networks procurement departments.

The video optimization market has dramatically changed over the last year. The growth in encrypted traffic, the uncertainty surrounding Citrix and the net neutrality debate have changed the feature set operators are looking for.
Real-time transcoding orders have been severely reduced because of costs and encryption, while TCP optimization, encrypted traffic analytics, video advertising and adaptive bit rate management are gaining favor.

The recent T-Mobile USA "Binge On" offering, providing managed video for premium services, is also closely followed by many network operators and will in all likelihood create more interest in video management collaboration solutions.

As usual, this and more in my report on video monetization.

Friday, November 20, 2015

Citrix shuts down ByteMobile

Citrix has decided to "de-invest" in the ByteMobile product line that was initially reported for sale. Citrix provided an update this week, in an investor call, on the results of the strategic review that was announced in September.
Executives commented:
"The underlying premises for the acquisition of ByteMobile have now vanished.We acquired the company for its ability to optimize video traffic,but today a significant amount of the video traffic is encrypted and can no longer be optimized. [...] We will transition some of the capabilities in the NetScaler product but for the most part phasing that product line out."
The company mentioned that ByteMobile revenue for 2015 was expected around $50m, breaking even. XenServer will also be discontinued (unsurprisingly, looking at VMware's and KVM's relative success).

Citrix had acquired Bytemobile in 2012 for $435m as the company was leading the video optimization market segment.

The video optimization market has greatly suffered as a standalone value proposition under the combined pressure of the growth of encrypted video traffic and the uncertainty surrounding ByteMobile's future, the market segment leader in terms of installed base. The vendors in the space have bundled the technology into larger offerings ranging from policy enforcement to video analytics and video advertising and monetization. Last week, T-Mobile introduced its "Binge On" video plan based on video optimization of adaptive bit rate traffic, and multiple vendors have been announcing support for encrypted video traffic management.

Further review of the video optimization market size and projections, vendors and strategies is available in workshop and report format.

Thursday, November 12, 2015

All you need to know about T-Mobile Binge On




Have you been wondering what T-Mobile US is doing with your video on Binge On?
Here is a small guide to and analysis of the service, its technology, features and limitations.

T-Mobile announced the launch of its new service, Binge On, at its Uncarrier X event on November 11. The company's CEO remarked that video is the fastest growing data service, up 145% compared to 2 years ago, and that consumers are increasingly watching video on mobile devices and wireless networks while cutting the cord from their cable and satellite TV providers. Binge On was created to meet these two market trends.

I have been previewing many of the features launched with Binge on in my video monetization report and my blog posts (here and here on encryption and collaboration) over the last 4 years.


Binge On allows any new or existing subscriber with a 3GB data plan or higher to stream videos for free from a number of apps and OTT properties. Let's examine what the offer entails:

  1. Subscribers with 3GB data plans and higher are automatically opted in. They can opt out at any moment and opt back in whenever they want. This is a simple mechanism that allows service transparency, but more importantly underpins the claim of a Net Neutral service. I have pointed out for a long time that services can be managed (prioritized, throttled, barred...) as long as subscribers opt in to them. Video optimization falls squarely in that category, and T-Mobile certainly heeded my advice in that area. More on this later.
  2. Services streaming free in Binge on are: Crackle, DirecTV, Encore, ESPN, Fox Sports, Fox Sports GO, Go90, HBO GO, HBO NOW, Hulu, Major League Baseball, Movieplex, NBC Sports, Netflix, Showtime, Sling Box, Sling TV, Starz, T-Mobile TV, Univision Deportes, Ustream, Vessel, Vevo, VUDU.
  3. You still have to register / subscribe to the individual services to be able to stream free on T-Mo network.
  4. Interestingly, no Google properties (YouTube) or Facebook are included yet. Discussions are apparently ongoing.
  5. These OTT video services maintain their encryption, so the content and consumer interactions are safe. 
  6. There were mentions of a mysterious "T-Mobile proprietary streaming technology and video optimization" that requires video service providers to integrate with T-Mobile. This is not transcoding; it relies on adaptive bit rate optimization, ranging from throttling data, to transrating, to manifest manipulation (asking video providers to leave the manifest unencrypted so that it can be edited and limited to 480p definition).
  7. Yep, video is limited to 480p definition, which T-Mobile defines as DVD quality. It's going to look good on a smartphone, OK on a tablet and bad on anything bigger / tethered.
  8. I have an issue with the representation "We've optimized streaming so that you can watch 3x more video", because it is mostly: 
    1. Inaccurate (if this is unlimited, how can unlimited be 3x what you are currently watching?); 
    2. Inexact (if they are referring to the fact that a 480p file can on average be 1/3 the size of a 1080p file, which is close enough, they are wrongly assuming that you only ever watch 1080p HD video, while most of these providers rely on adaptive bit rate and therefore vary the video definition based on network conditions);
    3. Wrong, since most people assume watching 3x more video means spending 3x the amount of time watching video, rather than 3x the file size;
    4. Of bad faith, since T-Mobile limited video definition so that users wouldn't kill its network. Some product manager / marketing drone decided to turn this limitation into a feature...
  9. [Chart: file size per hour of streamed video, per definition]
  10. Now, in the fine print, for the rest of the videos you watch that are not part of the package, expect that "Once high-speed data allotment is reached, all usage slowed to up to 2G speeds until end of bill cycle." 2G speed? For streaming video? Like watching an animated GIF? That's understandable, though; there has to be a carrot (and a stick) for providers who have not joined yet, as well as some fair usage rules for subscribers breaching their data plans - but 2G speed? Come on, might as well stop the stream rather than pretend that you can stream anything at 128 kbps.
  11. More difficult to justify is the mention "service might be slowed, suspended, terminated, or restricted for misuse, abnormal use, interference with our network or ability to provide quality service to other users". So basically, there is no service level agreement for minimum quality of service. Ideally, if a video service is limited to 480p (when you are paying Netflix, etc. for 1080p or even 4K, let's remember), one should expect either a guaranteed level or a minimum quality floor.
  12. Another vague and spurious rule is "Customers who use an extremely high amount of data in a bill cycle will have their data usage de-prioritized compared to other customers for that bill cycle at locations and times when competing network demands occur, resulting in relatively slower speeds." This is not only vague and subjective; it will vary over time and location (with 145% growth in 2 years, an abnormal video user today will be average tomorrow). More importantly, it goes against some of the net neutrality rules.
T-Mobile innovates again with a truly new approach to video services. Unlike Google's Project Fi, it is a bold strategy, relying on video optimization to provide a quality ceiling, and on integration with OTT content providers to enable the limitation and, more importantly, an endorsement of the service. The service is likely to be popular in terms of adoption and usage; it will be interesting to see, as its user base grows, how user experience evolves over time. At least there is now a fixed ceiling for video, which will allow for network capacity planning, removing variability. What is most remarkable about the launch, from my perspective, is the desire to innovate and take risks by launching a new service, even if there are some limitations (video definition, providers...) and risks (net neutrality).

Want to know more about how to launch a service like Binge on? What technology, vendors, price models...? You can find more in my video monetization reports and workshop.

Friday, September 4, 2015

Video is eating the internet: clouds, codecs and alliances

A couple of news items should have caught your attention this week if you are interested in the video streaming business.

Amazon Web Services confirmed yesterday the acquisition of Elemental. This is the outcome of a trend I have been highlighting in my SDN / NFV report and workshops over the last year: the creation of specialized clouds. Elemental's products are software-based, and the company was the first in professional video to offer cloud-based encoding on Amazon EC2 with a PaaS offering. Elemental has been building virtual private clouds on commercial clouds for its clients and was the first to coin the term "Software Defined Video". As Elemental joins AWS, Amazon will be one of the first commercial clouds to offer a global, turnkey video encoding, workflow and packaging infrastructure in the cloud. Video processing requires specific profiles in a cloud environment, and it is not surprising that companies with cloud assets look at creating cloud slices or segregated virtual environments to manage this processing-heavy, latency-sensitive service.


The codec war has been on for a long time, and I have previously commented on it. In other news, we have seen Amazon again join Cisco, Google, Intel, Microsoft, Mozilla and Netflix in the Alliance for Open Media. This organization's goal is to counter unreasonable claims made by a group of H.265 / HEVC patent holders called HEVC Advance, who are trying to create a very vague and very expensive licensing agreement for the use of their patents. The group, composed of Dolby, GE, Mitsubishi Electric and Technicolor, is trying to enforce a 0.5% fee on any revenue associated with the codec's use. The license fee would apply indiscriminately to all companies who encode, decode, transmit or display HEVC content. If H.265 were to be as successful as H.264, it would account in the future for over 90% of all video streaming traffic, and that 0.5% tax would presumably be levied on any content provider, aggregator, app or web site... HEVC Advance could become the most profitable patent pool ever, with 0.5% of the revenues of Google's, Facebook's or Apple's video businesses. The group does not stop there and proposes a license fee on devices as well, from smartphones to tablets to TVs or anything that has a screen and a video player able to play H.265 videos... Back to the Alliance for Open Media, which has decided to counterattack and vows to create a royalty-free next-generation video codec. Between Cisco's Thor, Google's VPx and Mozilla's Daala, this is a credible effort to counter HEVC Advance.


The Streaming Video Alliance, created in 2014 to provide a forum for TV, cable, content owners and service providers to improve the internet video streaming experience, welcomes Sky and Time Warner Cable to a group already composed of Alcatel-Lucent, Beamr, CableLabs, Cedexis, Charter Communications, Cisco, Comcast, Conviva, EPIX, Ericsson, FOX Networks, Intel, Irdeto, Korea Telecom, Level 3 Communications, Liberty Global, Limelight Networks, MLB Advanced Media, NeuLion, Nominum, PeerApp, Qwilt, Telecom Italia, Telstra, Ustream, Verizon, Wowza Media Systems and Yahoo!. What is remarkable here is the variety of the group, where MSOs, vendors and service providers are looking at transparent caching architectures and video metadata handling outside of the standards, to counter specialized video delivery networks such as Apple's, Google's and Netflix's.

All in all, video is poised to eat the internet, and IBC, starting next week, will no doubt bring a lot more exciting announcements. The common denominator here is that all these companies have identified that encoding, managing, packaging and delivering video well will be a crucial differentiating factor in tomorrow's networks. Domination of only one element of the value chain (codec, network, device...) will guarantee great power in the ecosystem. Will the vertically integrated ecosystems such as Google and Apple yield as operators, distributors and content owners organize themselves? This and much more in my report on video monetization in 2015.

Tuesday, April 14, 2015

Video Monetization 2015 report and market shares released

Live from Las Vegas, where I am at NAB for the week: the mobile video monetization and optimization 2015 report is now released. You can find the updated description and executive summary there; as usual, the table of contents and terms are available upon request, so do not hesitate to contact me (patrick.lopez@coreanalysis.ca).

As usual, I provide market share calculations in terms of deployments per vendor, the unit being one operator / country. For instance, Verizon Wireless counts for one deployment, even though the operator might deploy 40+ data centres. Groups such as Vodafone, Deutsche Telekom or Telefonica count once for each of the properties where the technology is deployed.

For this 2015 edition, we have seen quite a lot of changes year on year and an acceleration of the trends highlighted in the last update, ranging from the continuing growth of mobile data and video traffic, complicated by increasing encryption and privacy concerns. 


Emerging markets and MVNOs with smaller volumes fuel the growth at lower price points, while tier 1 replacements are slowing down due to regulatory uncertainty. It is hard to predict how long this will last, but I am betting on a protracted battle, with operators slowly having to take investment decisions despite the uncertainty because their networks are under too much pressure. TCP optimization, caching and throttling will continue to lead engagements in countries under a strong regulatory mandate or uncertainty, while transcoding, DBRA and other lossy technologies will continue to lead in emerging markets and weak regulatory environments.

The mobile video monetization and optimization market segment researched in this report is composed of 8 primary vendors.

2015 has seen a great change in market shares, as indicated in the previous reports and throughout my quarterly updates. You can find the fall's market shares here, if you want to track the vendors' progression.
  1. Citrix keeps its historical market leader spot, with a slight progression to 32%. 
  2. Flash Networks had lost the number 1 spot last update and is maintaining itself at 31%. 
  3. Openwave is solidly in third place, growing to 13%.
  4. Fourth place is now claimed by Allot, with the fastest progression this update to 7%, 
  5. Vantrix is in a slight decline at 6%. 
  6. Nokia declines to 5% and has decided to resell Flash Networks going forward. 
  7. Opera has declined to 4%. 
  8. Avvasi closes the market share with a growth to 2%.
The market share calculations are based on a proprietary {Core Analysis} database, collecting data such as vendor, re-seller, value of the deployment in terms of total cost of ownership for the operator, operator name, country, region and number of mobile broadband subscribers. These data are cross-referenced with vendors' and operators' individual disclosures. The database also includes over 130 opportunities in video optimization that are at different stages of maturity (internal evaluation, vendor trial, RFI, RFx...) and will close over the next 18 months.
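For clarity, the counting convention works roughly as sketched below, with invented rows: one operator / country pair counts once per vendor, however many data centres it covers.

from collections import Counter

deployments = [                      # (vendor, operator, country) -- illustrative rows only
    ("VendorA", "OperatorX", "US"),
    ("VendorA", "OperatorY", "DE"),
    ("VendorB", "OperatorX", "US"),  # same operator / country, different vendor: counts separately
    ("VendorB", "OperatorZ", "FR"),
    ("VendorA", "OperatorX", "US"),  # extra data centre for an existing deployment: counted once
]

unique = {(vendor, operator, country) for vendor, operator, country in deployments}
counts = Counter(vendor for vendor, _, _ in unique)
total = sum(counts.values())
for vendor, n in counts.most_common():
    print(f"{vendor}: {n} deployments, {100 * n / total:.0f}% share")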


To understand the vendors' trajectory, velocity and strategy better, contact me.

Tuesday, March 31, 2015

Net neutrality... so what?


[...] 
In the US, on February 26, days before the Mobile World Congress, the Federal Communications Commission released a declaratory ruling on “protecting and promoting the open internet”. The reclassification of fixed and mobile network services under Title II telecom services by the FCC means in substance that network operators will be prevented from blocking, throttling and prioritizing traffic, and will have to be transparent in the way their traffic management rules are applied. This is essentially due to an earlier ruling from the DC Circuit, Verizon v. FCC, that struck down the FCC’s rules against blocking and traffic discrimination but remarked that “broadband providers represent a threat to Internet openness and could act in ways that would ultimately inhibit the speed and extent of future broadband deployment.”

A great issue is that broadband providers in this case are exclusively network operators, and not OTT providers, who have, in my mind, the same capacity and a similar track record in the matter. The FCC tried to provide “more broadband, better broadband and open broadband” and in its haste has singled out one party of the ecosystem, essentially condemning network operators to a utility model. This nearsightedness is unlikely to stand, as several companies have already decided to challenge it. Less than a month after its publication, the order is being challenged in court by the United States Telecom Association, a lobbying group representing broadband and wireless network operators, as well as by Alamo, a broadband provider in Louisiana. There is no doubt that legal proceedings will occupy and fatten lawyers on both sides for years to come.

In Europe, the net neutrality debate is also far from settled. After the European Commission seemed to take a no-throttling, no-blocking, no-prioritization stance in its “Digital Single Market” initiative, network operators, through their lobbying arm ETNO (European Telecommunications Network Operators’ Association), started to challenge these provisions at the country level. Since the European Commission has not yet passed a law on the subject, the likelihood of a strong net neutrality stance will depend on support from each nation. In November 2014, compromises in the form of “non-discriminatory and proportionate” plans were discussed. The result is that net neutrality is still very much a moving target, with a lot of effort being expended to enable a managed internet experience, with a fast lane and a best-effort lane. The language and ideas surrounding net neutrality are very vague, suggesting either a great lack of technical expertise or a reluctance to provide enforceable guidance (or both). It is more likely that countries will individually start passing laws to regulate some aspects of traffic management until a consensus is found at the European level.


In conclusion, there is obviously much debate over net neutrality globally, with many emotional, commercial and technical implications. There is at this stage no evidence of any regulatory authority having a good enough grasp of both the technical and commercial realities to make a fair and enforceable ruling. As a result, politics, public sentiment, lobbying and lawyers will dictate the law for the next 5 years. In the meantime, it is likely that loopholes will be found and that collaborative approaches will demonstrate a lucrative business model, likely making the whole debate obsolete.

More analysis on traffic encryption, mobile advertising, data, video, mobile and media trends in  "Mobile video monetization 2015".