Wednesday, November 2, 2016

TIPping point

For those of you familiar with this blog, you know that I have been advocating for more collaboration between content providers and network operators for a long time (here and here for instance). 

In my new role at Telefonica, I support a number of teams of talented intrapreneurs tasked with inventing Telefonica's next generation networks, to serve the evolving needs of our consumers, enterprises and things at a global level. Additionally, connecting the unconnected and fostering sustainable, valuable connectivity services is a key mandate for our organization.

Very quickly, much emphasis has been put on delivering specific, valuable use cases, validating hypotheses through prototyping, testing and commercial trials in compressed time frames. I will tell you more about Telefonica's innovation process in a future blog.

What has been clear is that open source projects and SDN have been a huge contributing factor to our teams' early successes. It is quite impossible to have weekly releases, innovation sprints and rapid prototyping without the flexibility afforded by software-defined networking. What has become increasingly important, as well, is the necessity, as projects grow and get transitioned to our live networks, to prepare people and processes for this more organic and rapid development. There are certainly many methodologies and concepts to enhance team and development agility, but we have been looking for a hands-on approach best suited to our environment as a network operator.

As you might have seen, Telefonica joined Facebook's Telecom Infra Project earlier this year and we have found this collaboration helpful. We are renewing our commitment and increasing our areas of interest beyond the Media Friendly Group and the Open Cellular Project with the announcement of our involvement with the People and Processes group. Realizing that - beyond technology - agility, adaptability, predictability and accountability are necessary traits of our teams, we are committing ourselves to sustainably improving our methods in recruitment, training, development, operations and human capital.

We are joining other network operators that have started - or will start - this journey, and we look forward to sharing with the community the results of our efforts and the path we are taking to transform our capabilities and skills.

Monday, July 25, 2016

Thank you!

This post is going to be a little different from what you usually read here. 
 
As you well know, the telecoms market is evolving fast and consumers' media needs, even faster.

If you've read this blog or remember some of my conference speeches, you know that I have been advocating that operators need to change their game if they want to remain competitive. 

When I was approached with a chance to try and do just that, it was hard to turn down. I have accepted a position at Telefonica group's research and development HQ in Madrid, where I will help with service innovation.

As my family and I start this great and exciting move, I want to extend my thanks and gratitude to the 60+ client companies who used my services over the last 5 years, namely:



I am looking forward to staying in touch with you all in my new capacity. I am not sure yet what will happen to this blog; stay tuned for more updates in the near future.

Thanks, finally, to all of you who have given me frequent marks of support over the last 5 years, on this blog, my LinkedIn posts and groups, and at conferences.

"If you always do what you've always done, you'll always get what you always got."


Tuesday, June 21, 2016

SDN / NFV: Enemy of the state

Extracted from my SDN and NFV in wireless workshop.

I want to talk today about an interesting subject I have seen popping up over the last six months or so, including in many presentations in the stream I chaired at the NFV World Congress a couple of months ago.

In NFV, and to a certain extent in SDN as well, service availability is achieved through a combination of function redundancy and fast failover routing whenever a failure is detected in the physical or virtual fabric. Availability is a generic term, though, and covers different expectations depending on whether you are a consumer, operator or enterprise. The telecom industry has heralded the mythical 99.999% or five nines availability as the target for telecoms equipment vendors to reach.

This goal has led to networks and appliances that are super redundant at the silicon, server, rack and geographical levels, with complex routing, load balancing and clustering capabilities to guarantee that element failures do not catastrophically impact services. In today's cloud networks, one arrives at the conclusion that a single cloud, even tweaked, can't perform beyond three nines availability and that you need a multi-cloud strategy to attain five nines of service availability...
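To put rough numbers on that claim, here is a back-of-envelope sketch (my own illustrative figures, assuming independent cloud failures and instantaneous failover, which real deployments only approximate):

```python
# Availability of a service that stays up as long as at least one cloud is up,
# assuming cloud failures are independent (a strong assumption in practice).
def combined_availability(availabilities):
    unavailability = 1.0
    for a in availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

single = 0.999                                  # a "three nines" cloud
dual = combined_availability([0.999, 0.999])    # two such clouds in active-active

print(f"single cloud: {single}")    # 0.999    -> roughly 8.8 hours of downtime a year
print(f"two clouds:   {dual:.6f}")  # 0.999999 -> under a minute a year, past five nines
```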

Consumers, over the last ten years, have proven increasingly ready to accept a service that might not always be of the best quality if the price point is low enough. We all remember the early days of Skype, when we would complain about failed and dropped calls or voice distortions, but we all put up with it mostly because it was free-ish. As the service quality improved, new features and subscription schemes were added, allowing for new revenues as consumers adopted new services.
One could think from that example that maybe it is time to relax the five nines edict for telecoms networks, but there are two data points that run counter to that assumption.


  1. The first and most prominent reason to keep a high level of availability is actually a regulatory mandate. Network operators run not only a commercial network but also critical infrastructure for emergency and government services. It is easy to think that 95 or 99% availability is sufficient until you have to deliver 911 calls, where that percentage difference means loss of life.
  2. The second reason is more innate to network operators themselves. Year after year, polls show that network operators believe that the way they will outcompete each other and OTTs in the future is quality of service, where service availability is one of the first table stakes.


As I am writing this blog, SDN and NFV in wireless have progressed over the last few years from demonstrating basic load balancing and static traffic routing to function virtualization and auto-scaling. What is left to get to commercial grade (and telco grade) offerings is resolving the orchestration bit (I'll write another post on the battles in this segment) and creating services that are both scalable and portable.

The portable bit is important, as a large part of the value proposition is to be able to place functions and services closer to the user or the edge of the network. To do that, an orchestration system has to be able to detect what needs to be consumed where and to place and chain relevant functions there.
Many vendors can demonstrate that part. The difficulty arises when it becomes necessary to scale in or down a function or when there is a failure.

Physical and virtual function failures are to be expected. When they arise in today's systems, there is a loss of service, at least for the users that were using these functions. In some cases the loss is transient and a new request / call will be routed to another element the second time around; in other cases it is permanent and the session / service cannot continue until another one is started.

In the case of scaling in or down, most vendors today will starve the virtual function and route all new requests to other VMs until this function can be shut down without impact to live traffic. It is not the fastest or the most efficient way to manage traffic. You essentially lose all the elasticity benefits on the scale down if you have to manage these moribund zombie-VNFs until they are ready to die.
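As a rough illustration of that drain-and-kill pattern (a minimal sketch with hypothetical object names, not any particular orchestrator's API), the scaled-in instance has to be kept alive, and paid for, until its very last session terminates:

```python
import time

def scale_in_by_draining(vnf_instance, load_balancer, poll_interval_s=5):
    """Hypothetical drain-based scale-in: stop sending new sessions to the
    instance, then wait for its existing sessions to end before terminating it.
    The wait is bounded only by the longest-lived session still attached."""
    load_balancer.remove_from_pool(vnf_instance)   # starve it: no new requests land here
    while vnf_instance.active_sessions() > 0:      # "zombie" phase: still consuming a VM
        time.sleep(poll_interval_s)                # while serving no new traffic
    vnf_instance.terminate()                       # capacity is only released at the end
```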

Vendors and operators who have been looking at these issues have come to a conclusion: beyond the separation of control and data plane, it is necessary to further separate the state of each machine, function and service and to centralize it, in order to achieve consistent availability and true elasticity and to manage disaster recovery scenarios.

In most cases, this is a complete redesign for vendors. Many of them have already struggled to port their product to software, then to hypervisors, then to optimize it for performance... separating state from the execution environment is not going to be just another port. It is going to require redesign and re-architecting.

The cloud-native vendors who have designed their platforms with microservices and modularity in mind have a better chance, but there is still a series of challenges to be addressed. Namely, collecting state information from every call in every function, centralizing it and then redistributing it is going to create a lot of signalling traffic. Some vendors are advocating inline signalling capabilities to convey the state information in a tokenized fashion; others are looking at more sophisticated approaches, including state controllers that will collect, transfer and synchronize relevant state across clouds.
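To make the state-separation idea concrete, here is a simplified sketch (made-up interfaces, not any vendor's design): the function keeps no session state locally, so any replica in any cloud can process the next packet of a session, at the price of an extra read and write per transaction, which is precisely the signalling overhead mentioned above.

```python
class StatelessVNF:
    """Sketch of a virtual function whose per-session state lives in an external,
    replicated store (key-value style), so a replica failure loses no sessions and
    scale-in / scale-out does not have to drain anything."""

    def __init__(self, state_store):
        self.state_store = state_store            # shared by all replicas, across clouds

    def handle_packet(self, session_id, packet):
        state = self.state_store.get(session_id) or {"packets": 0}
        state["packets"] += 1                     # ...real packet processing would go here...
        self.state_store.put(session_id, state)   # every update is externalized: this is
        return packet                             # the extra signalling traffic to budget for
```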
In any case, it looks like there is still quite a lot of work to be done in creating truly elastic and highly available virtualized, software-defined networks.

Monday, June 13, 2016

Time to get out of consumer market for MNOs?

I was delivering a workshop on SDN / NFV in wireless last week at a major pan-European tier-one operator group, and the questions of encryption and net neutrality were put on the table again.

How much clever, elastic, agile software-defined traffic management can we really expect when "best effort" dictates the extent of traffic management and encryption renders many efforts to just understand traffic composition and velocity difficult?

There is no easy answer. I have spoken at length on both subjects (here and here, for instance) and the challenges have not changed much. Encryption is still a large part of traffic and, although it is not growing as fast as initially expected after Google, Netflix, Snapchat or Facebook's announcements, it is still a dominant part of data traffic. Many start to think that HTTPS / SSL is a first world solution, as many small and medium scale content or service providers that live on freemium or ad-sponsored models can't afford the additional cost and latency unless they are forced to. Some think that encryption levels will hover around 50-60% of the total until mass adoption of HTTP/2, which could take 5+ years. With T-Mobile's Binge On, we have seen a first service launch that actively manages traffic, even encrypted traffic, to an agreed-upon quality level. The net neutrality activists cried foul at the launch of the service, but quickly retreated when they saw its popularity and the first tangible signs of collaboration between content providers, aggregators and operators for customers' benefit.

As mentioned in the past, the problem is not technical, moral or academic. Encryption and net neutrality are just symptoms of an evolving value chain where the players are attempting to position themselves for dominance. The solution will be commercial and will involve collaboration in the form of content metadata exchange to monitor, control and manage traffic. Mobile Edge Computing can be a good enabler in this. Mobile advertising, which is still missing over 20b$ in investment in the US alone when compared to other media and time spent / eyeball engagement, will likely be part of the equation as well.

...but what happens in the meantime, until the value chain realigns? We have seen consumer postpaid ARPU declining in most mature markets for the last few years, while engagement and usage of so-called OTT services have exploded. Many operators continue to keep their heads in the sand, thinking "business as usual" while timidly investigating new potential "revenue streams".

I think that the time has come for many to wake up and take hard decisions. In many cases, operators are not equipped organizationally or culturally for the transition that is necessary to flourish in a fluid environment where consumers flock to services that are free, freemium, or ad-sponsored. What operators know best, subscription services, see their prices under intense pressure because OTTs are looking at usage and penetration at global levels, rather than per country. For those operators who understand the situation and are changing their ways, the road is still long and strewn with obstacles, particularly on the regulatory front, where they are not playing by the same rules as their OTT competition.

I suggest here that for many operators, it is time to get out. You had a good run and made lots of money on consumer services through 2G, 3G and early 4G; the next dollars or euros are going to be tremendously more expensive to earn than the earlier ones.
At this point, I think there are emerging and underdeveloped verticals (such as enterprise and IoT) that are easier to penetrate (fewer regulatory barriers, more need for managed network capabilities and, at least in the case of enterprise, more investment possibilities).
I think that at this stage, any operator who derives most of its revenue from consumer services should assume that these will likely dwindle to nothing unless drastic operational, organizational and cultural changes occur.
Some operators see the writing on the wall and have started the effort. There is no guarantee that it will work, but certainly having a software-defined, virtualized, elastic network will help if they are betting the farm on service agility. Others are looking at new technologies, open source and standards as they have done in the past, aligning little boxes from industry vendors in neat PowerPoint roadmap presentations, hiring a head of network transformation or virtualization... for them, I am afraid, reality will come hard and fast. You don't invest in technologies to build services. You build services first and then look at whether you need more or new technologies to enable them.

Thursday, May 5, 2016

MEC: The 7B$ opportunity

Extracted from Mobile Edge Computing 2016.
Table of contents



Defining an addressable market for an emerging product or technology is always an interesting challenge. On one hand, you have to evaluate the problems the technology solves and their value to the market; on the other hand, you have to appreciate the likely cost structure and the psychological price expectations of the potential buyers / users.

This warrants a top-down and bottom-up approach, looking at how the technology can contribute to or substitute for some current radio and core network spending, together with a cost-based review of the potential physical and virtual infrastructure. [...]

The cost analysis is comparatively easy, as it relies on the well understood current cost structure for physical hardware and virtual functions. The assumptions surrounding hardware costs have been reviewed with the main x86-based hardware vendors. The VNF pricing relies on discussions with large and emerging telecom equipment vendors about price structures for standard VNFs such as EPC, IMS, encoding, load balancers and DPI. Traditional telco professional services, maintenance and support costs are apportioned and included in the calculations.

The overall assumption is that MEC will become part of the fabric of 5G networks and that MEC equipment will cover up to 20% of a network (coverage or population) when fully deployed.
The report features the total addressable market, cumulative and incremental, for MEC equipment vendors and integrators, broken down by CAPEX / OPEX and by consumer, enterprise and IoT services.
It then provides a review of operator opportunities and revenue models for each segment.


Wednesday, April 27, 2016

NFV costs expectation gap

I am fresh off an interesting week in sunny San Jose, at the NFV World Congress, where I chaired the operations stream on the first day.

As usual, it is a week where operators and vendors jostle to show off their progress since last year and highlight the challenges ahead. Before I speak about the new and cool developments in terms of stateless VNFs, open source orchestration, containers, Kubernetes and unikernels, I felt the need to share some observations regarding diverging expectations from traditional telecoms vendors, VNF vendors, systems integrators and operators.

While a large part of the presentations showed a renewed focus on operations in NFV, a picture started to emerge in my mind in terms of expectations between vendors, systems integrators and operators at the show.

Hardware
Essentially, everyone expects that the hardware bill for a virtualized network will shrink, due to the transition to x86 hardware. While this transition might mean less efficiency in the short term, all players seem to think that it will resolve itself over the next few years. In the meantime, DPDK and SR-IOV are used to address the performance gap between virtualization and traditional appliances, even at the cost of agility. By my estimate, the hardware cost reduction demonstrated by VNF vendors and systems integrators still falls short of operators' expectations. Current figures place it around a 30% cost reduction vs. the traditional model, whereas operators' expectations hover between 50 and 66%.

Software
This is an area where we see sharp variations in expectations between all actors in the value chain.
VNF vendors expect to be able to somehow capture some of the hardware savings and translate them into additional license fees. This thinking is boosted by the need for an internal business case to transition from appliance to software, to virtualized and eventually to orchestrated VNFs. We are still very early in the market and software licensing models for VNFs are all over the place, in many cases simply translated from the appliance model, in other cases built from scratch but with little understanding of the value of specific functions in the overall service chain. Increased competition and market entry from non-traditional telco vendors will level the licensing structure over time.

Systems integrators are increasingly looking at VNFs as disposable. Operators tell them that they want to have little dependency on vendors and to replace VNFs and vendors as needed, even running different vendors for the same function in different settings or slices. Systems integrators are buying into the rationale and are privileging their own VNFs, putting emphasis (and a price premium) on their NFVI (infrastructure) and VNFM (management). Of course this also leads to the conclusion that while VNFs (and VNF vendors) should be interchangeable, the NFV MANO (management and orchestration) function will be very sticky and will likely stay a single-vendor proposition in a given network. As a result, some are predicting an era of orchestrator wars, which certainly feels timely after the SDN management war (winner OpenStack), the southbound interface war (winner OpenFlow), the hypervisor war (winner KVM)...
I have spoken at length about the danger operators expose themselves to if they vacate the orchestration field and leave systems integrators to rule it. The idea seems to have gained some traction, with open source orchestration projects being pushed into standards bodies. In any case, VNF vendors expect a growth in software licensing vs. the appliance model, whereas integrators and operators expect a reduction.

Professional services
This is the area where everyone seems to agree that an increase is inevitable. SDN and NFV provide layers upon layers of abstraction and, while standards and open source are not fully defined, much integration and many "enhancements" are necessary to make a service on NFV work.
VNF vendors and operators who do not want to perform integration themselves usually expect a 50% increase vs. appliance projects, whereas integrators budget a robust 100% increase on average. This, of course, increases even further if the integrator is managing the infrastructure / service itself.

Maintenance and support
Vendors and integrators expect the ratio of these to be essentially comparable to appliance models, whereas operators expect a sharp reduction, in light of all the professional services expended on integration and automation.

Total
VNF vendors, behind closed doors, will usually admit that, in the short term, the cost of rolling out a new VNF function / service might be a little higher than an appliance, due to the performance gap and the increase in professional services. There are sharp variations between traditional vendors that are porting their solutions to NFV and new vendors that are cloud-native and have designed their solutions for a software-defined, virtualized environment.
Systems integrators can show an overall cost reduction, but usually because of proprietary "enhancements and optimizations".
All are confident, though, that automation and orchestration make operating existing services much cheaper and ramping up new ones much faster. Expectations are that a VNF architecture will be much more cost effective than appliances on a 3-to-5-year TCO model. Operators, on their end, expect an NFV architecture to yield savings from day one compared to appliances, and to further increase this gap over a 3-year period.

Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speed and ultra low latency to enable Internet of Things, autonomous vehicles, augmented, mixed and virtual reality and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses latest virtualization, small cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network. 


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are and how this potential multi-billion-dollar technology can change how OTTs, operators, enterprises and machines can enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator perspective.

The table of contents and executive summary can be downloaded here.

Tuesday, April 19, 2016

Net neutrality, meet lawful interception

This post is written today from the NFV World Congress, where I am chairing the first day's track on operations. Many presentations in the pre-show workshop day point to an increased effort from standards bodies (ETSI, 3GPP...) and open source organizations (OpenStack, OpenDaylight...) to address security by design in next generation network architectures.
Law enforcement agencies are increasingly invited to contribute to or advise on the standardization work to ensure their needs are baked into the design of these networks. Unfortunately, it seems that there is a large gap between law enforcement agencies' requirements, standards and regulatory bodies. Many of the trends we are observing in mobile networks, from software defined networking to network functions virtualization and 5G, assume that operators will be able to intelligently route traffic and apportion resources elastically. Lawful interception regulations mandate that operators, upon a lawful request, provide means to monitor, intercept and transcribe any electronic communication for security agencies.

It has been hard to escape the headlines lately when it comes to mobile networks, law enforcement and privacy. On one hand, privacy is an inalienable right that we should all be entitled to; on the other hand, we elect governments with the expectation that they will be able to protect us from harm, physical or digital.

Digital harm, until recently, was mostly illustrated by misrepresentation, scams or identity theft. Increasingly, though, it translates into the physical world, as attacks can impact not only one's reputation and credit rating but also one's job, banking and, soon, cars and connected devices.

I have written at length about the erroneous assumptions that are underlying many of the discourses of net neutrality advocates. 
In order to understand net neutrality and traffic management, one has to understand the different perspectives involved.
  • Network operators compete against each other on price, coverage and, more importantly, network quality. In many cases, they have identified that improving or maintaining Quality of Experience is the single most important success factor for acquiring and retaining customers. We have seen it time and again with voice services (call drops, voice quality…), messaging (texting capacity, reliability…) and data services (video start, stalls, page loading time…). These KPIs are at the heart of the operator's business. As a result, operators tend to try to improve or control user experience by deploying an array of traffic management functions, etc...
  • Content providers assume that highest quality of content (HD for video for instance) equals maximum experience for subscriber and therefore try and capture as much network resource as possible to deliver it. Browser / apps / phone manufacturers also assume that more speed equals better user experience, therefore try to commandeer as much capacity as possible. A reaction to operators trying to perform traffic management functions is to encrypt traffic to obfuscate it. 
The flaw here is the assumption that the optimum is the product of many maxima self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through these networks.

This behavior leads to a network where resources can be in contention and all end-points vie for priority and maximum resource allocation. From this perspective one can understand that there is no such thing as "net neutrality" at least not in wireless networks. 

When network resources are over-subscribed, decisions are taken as to who gets more capacity, priority, speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When in contention, resources manage traffic based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.

Now, who said access to wireless should be fair and equal? Unless the networks are nationalized and become government assets, I do not see why private companies, in a competitive market couldn't manage their resources in order to optimize their utilization.


If we transport ourselves into a world where all traffic becomes encrypted overnight, networks lose the ability to manage traffic beyond allowing / blocking it and fixing high-level QoS metrics for specific services. That would lead to network operators being forced to charge exclusively for traffic tonnage. At that point, everyone has to pay per byte transmitted. The cost to users would become prohibitive as more and more video of higher resolution flows through the networks. It would also mean that these video providers could asphyxiate the other services... More importantly, it would mean that the user experience would become the fruit of the fight between content providers' abilities to monopolize network capacity, which would go against any net neutrality principle. A couple of content providers could dominate not only services but access to these services as well.

The problem is that encryption makes most traffic management and lawful interception provisions extremely unlikely or, at the least, very inefficient. Privacy is an important facet of net neutrality advocates' discourse. It is indeed the main reason many content and service providers invoke for encrypting traffic. In many cases this might be a true concern, but it is hard to reconcile with the fact that many provide encryption keys and certificates to third party networks or CDNs, for instance, to improve caching ratios, perform edge packaging or insert advertising. There is nothing that would prevent this model from being extended to wireless networks to perform similar operations. Commercial interest has so far prevented these types of models from emerging.

If encryption continues to grow, and service providers deny operators the capability to decrypt traffic, the traditional burden of lawful interception might be transferred to the former. Since many providers are transnational, what is defined as lawful interception is likely to be unenforceable. At this stage we might have to choose, as societies, between digital security and privacy.
In all likelihood, though, one can hope that regulatory bodies will up their technical game and understand the nature of digital traffic in the 21st century. This should lead to lawful interception mandates being applicable equally to all parts of the delivery chain, which will force collaborative behavior between the actors.

Monday, April 4, 2016

MEC 2016 Executive Summary

2016 sees a sea change in the fabric of the mobile value chain. Google reports that mobile search revenue now exceeds desktop, whereas 47% of Facebook members are now exclusively on mobile, generating 78% of the company's revenue. It has taken time, but most OTT services that were initially geared towards the internet are rapidly transitioning towards mobile.

The impact is still to be felt across the value chain.

OTT providers have a fundamentally different view of services and value different things than mobile network operators. While mobile networks have been built on the premises of coverage, reliability and ubiquitous access to metered network-based services, OTTs rely on free, freemium, ad-sponsored or subscription-based services where fast access and speed are paramount. An increase in latency impacts page load and search times and can cost OTTs billions in revenue.

The reconciliation of these views and the emergence of a new coherent business model will be painful but necessary and will lead to new network architectures.

Traditional mobile networks were originally designed to deliver content and services that were hosted on the network itself. The first mobile data applications (WAP, multimedia messaging…) were deployed in the core network, as a means to be both as close as possible to the user and centralized enough to avoid replication and synchronization issues.
3G and 4G networks still bear the design associated with this antiquated distribution model. As technology and user behaviours have evolved, a large majority of the content and services accessed on cellular networks today originate outside the mobile network. Although content is now stored in and accessed from clouds, caches, CDNs and the internet, a mobile user still has to go through the internet, the core network, the backhaul and the radio network to get to it. Each of these steps sees a substantial decrease in throughput capacity, from hundreds of Gbps down to Mbps or less. Additionally, each hop adds latency to the process. This is why networks continue to invest in increasing throughput and capacity. Streaming a large video or downloading a large file from a cloud or the internet is a little bit like trying to suck ice cream through a 3-foot bendy straw.

Throughput and capacity will certainly grow tremendously with the promise of 5G networks, but latency remains an issue. Reducing latency requires reducing the distance between the consumer and where content and services are served. CDNs and commercial specialized caches (Google, Netflix…) have been helping reduce latency in fixed networks by caching content as close as possible to where it is consumed, with the propagation and synchronization of content across Points of Presence (PoPs). Mobile networks' equivalents of PoPs are the eNodeB, RNC or cell aggregation points. These network elements, part of the Radio Access Network (RAN), are highly proprietary, purpose-built platforms to route and manage mobile radio traffic. Topologically, they are the closest elements mobile users interact with when they are accessing mobile content. Positioning content and services there, right at the edge of the network, would substantially reduce latency.
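As an order-of-magnitude illustration (my own rough, assumed figures, not measurements), the round trip shortens dramatically when the serving point moves from a distant data center to the RAN edge:

```python
# Illustrative one-way delay budgets in milliseconds (assumptions, not measurements).
central_path = {"radio access": 20, "backhaul": 10, "core network": 10, "internet / distant CDN": 30}
edge_path    = {"radio access": 20, "MEC host at the cell aggregation point": 2}

def round_trip_ms(path):
    # Round trip is roughly twice the sum of the one-way segment delays.
    return 2 * sum(path.values())

print("via a central cloud:", round_trip_ms(central_path), "ms")  # ~140 ms
print("via the mobile edge:", round_trip_ms(edge_path), "ms")     # ~44 ms
```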
For the first time, there is an opportunity for network operators to offer OTTs what they will value most: ultra-low latency, which will translate into a premium user experience and increased revenue. This will come at a cost, as physical and virtual real estate at the edge of the network will be scarce. Net neutrality will not work at the scale of an eNodeB, as commercial law will dictate the few application and service providers that will be able to pre-position their content.

Mobile Edge Computing provides the ability to deploy commercial-off-the-shelf (COTS) IT systems right at the edge of the cellular network, enabling ultra-low latency, geo-targeted delivery of innovative content and services. More importantly, MEC is designed to create a unique competitive advantage for network operators derived from their best assets, the network and the customers’ behaviour. This report reviews the opportunity and timeframe associated with the emergence of this nascent technology and its potential impact on mobile networks and the mobile value chain.

Friday, March 18, 2016

For or against Adaptive Bit Rate? part V: centralized control

Over the last few weeks I have seen much speculation and many claims around T-Mobile's Binge On service launch, and these have accelerated with yesterday's announcement of Google Play and YouTube joining the service. As usual, many are getting on their net neutrality battle horse, using fraught assumptions and misconceptions to reject the initiative.

I have written at length about what ABR is and what its pros and cons are; you can find some extracts in the links at the end of this post. I'll try here to share my views and expose some facts to enable a more pragmatic approach.

I think we can safely assume that every actor in the mobile video delivery chain wants to enable the best possible experience for users, whenever possible.
As I have written in the past, in the current state of affairs, adaptive bit rate is oftentimes corrupted in order to seize as much network bandwidth as possible, which results in devices and service providers aggressively competing for bits and bytes.
Content providers assume that the highest quality of content (1080p HD video for instance) equals maximum experience for the subscriber and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. The flaw here is the assumption that the optimum is the product of many maxima self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

An OTT cannot know why a user's session downstream speed is degrading; it can just report it. Knowing why is important because it enables better decisions about the possible corrective actions that need to be undertaken to preserve the user's experience. For instance, a reduction of bandwidth for a particular user can be the result of a handover (4G to 3G or between cells with different capacity), or of congestion in a given cell, or of the distance between the phone and the antenna, or of whether a user enters a building or an elevator, or whether she is reaching her data cap and being throttled, etc.… Reasons can be multiple and, for each of them, a corrective action can have a positive or a negative effect on the user's experience. For instance, in a video streaming scenario, you can have a group of people in a given cell streaming Netflix and others streaming YouTube. Naturally, the video streamed is in progressive download, adaptive bit rate format, which means that the stream will try to climb to the highest available download bit rate to deliver the highest video definition possible. All sessions will theoretically increase the delivered definition up to the highest available or the highest delivery bit rate available, whichever comes first. In a network with ample capacity, everyone ramps up to 1080p and everyone has a great user experience.

More often than not, though, that particular cell cannot accommodate everyone's stream at the highest definition at the same time. Adaptive bit rate is supposed to help there again, by stepping down definition until it fits within the available delivery bit rate. Unfortunately it can't work like that when we are looking at multiple sessions from multiple OTTs. Specifically, as soon as one player starts reducing its definition to meet a lower delivery bit rate, that freed-up bandwidth is grabbed by the other players, which can now look at increasing their definition even more. There is no incentive for content providers to reduce bandwidth quickly to follow network conditions, because they can become starved by their competition in the same cell.
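A toy model makes the point (a heavily simplified sketch: one cell of fixed capacity, an illustrative bitrate ladder, greedy players that all re-decide at the same time, which exaggerates the effect):

```python
# Toy model of competing ABR players sharing one cell (illustrative only).
LADDER = [1.0, 2.5, 5.0, 8.0]        # Mbps per rendition, e.g. ~360p up to 1080p
CELL_CAPACITY = 18.0                 # Mbps shared by everyone in the cell

def greedy_step(rates):
    """Each player grabs the highest rendition that fits in the capacity the
    *others* appear to leave free; nobody backs off for the common good."""
    new_rates = []
    for r in rates:
        headroom = CELL_CAPACITY - (sum(rates) - r)
        fitting = [x for x in LADDER if x <= headroom]
        new_rates.append(max(fitting) if fitting else LADDER[0])
    return new_rates

rates = [8.0, 8.0, 8.0]              # three players all start at 1080p: 24 Mbps > 18 Mbps
for step in range(4):
    rates = greedy_step(rates)
    print(step, rates, "total:", sum(rates), "Mbps")
# Whenever the players step down, the freed bandwidth immediately invites them all
# back up, so the cell oscillates between overload and under-use instead of
# settling on a stable, fair share.
```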

The solution here is simple, the delivery of ABR video content has to be managed and coordinated between all providers. The only way and place to provide this coordination is in the mobile network, as close to the radio resource as possible. [...]

This and more in my upcoming Mobile Edge Computing report.


Part I: What is ABR?
Part II: For ABR
Part III: Why isn't ABR more successful?
Part IV: Alternatives

Tuesday, March 15, 2016

Mobile QoE White Paper




Extracted from the white paper "Mobile Networks QoE" commissioned by Accedian Networks. 

2016 is an interesting year in mobile networks. Maybe for the first time, we are seeing tangible signs of evolution from digital services to mobile-first. As was the case for the transition from traditional services to digital, this evolution causes disruptions and new behavior patterns in the ecosystem, from users to networks to service providers.
Take for example social networks. 47% of Facebook users access the service exclusively through mobile and generate 78% of the company's ad revenue. In video streaming services, YouTube sees 50% of its views on mobile devices and 49% of Netflix's 18-to-34-year-old demographic watches it on mobile.
This extraordinary change in behavior causes unabated traffic growth on mobile networks as well as changes in the traffic mix. Video becomes the dominant use that pervades every other aspect of the network. Indeed, all involved in the mobile value chain have identified video services as the most promising revenue opportunity for next generation networks. Video services are rapidly becoming the new gold rush.


“Video services are the new gold rush”
Video is essentially a very different animal from voice or even other data services. While voice, messaging and data traffic can be predicted fairly accurately as a function of the number and density of subscribers, time of day and busy hour patterns, video follows a less predictable growth. There is a wide disparity in consumption from one user to another, and this is not only due to their viewing habits. It is also a function of their device screen size and resolution, the network they are using and the video services they access. The same video, viewed on a social sharing site on a small screen, or in full HD or 4K on a large screen, can have a 10-20x impact on the network, for essentially the same service.
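As an illustration of that spread, using typical streaming bitrates (rounded, indicative figures; actual values vary by codec and service):

```python
# Indicative streaming bitrates in Mbps (rounded; varies by codec and service).
BITRATES_MBPS = {
    "small-screen social clip (~480p)": 1.0,
    "full HD (1080p)": 5.0,
    "4K / UHD": 20.0,
}

def data_per_view_mb(bitrate_mbps, minutes):
    """Megabytes carried by the network for one viewing session."""
    return bitrate_mbps * minutes * 60 / 8     # Mbit/s * seconds / 8 bits per byte

for label, rate in BITRATES_MBPS.items():
    print(f"{label:35s} ~{data_per_view_mb(rate, 10):5.0f} MB for a 10-minute view")
# ~75 MB vs ~1500 MB for the "same" 10-minute video: the 10-20x spread quoted above.
```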


Video requires specialized equipment to manage and guarantee its quality in the network; otherwise, when congestion occurs, there is a risk that it consumes resources, effectively denying voice, browsing, email and other services fair (and necessary) access to the network.
This unpredictable traffic growth results in exponential costs for networks to serve the demand.
As mobile becomes the preferred medium to consume digital content and services, Mobile Network Operators (MNOs), whose revenue was traditionally derived from selling “transport,” see their share squeezed as subscribers increasingly value content and have more and more options in accessing it. The double effect of the MNOs’ decreasing margins and increasing costs forces them to rethink their network architecture.
New services on the horizon, such as Voice and Video over LTE (VoLTE & ViLTE), augmented and virtual reality, wearables and IoT, automotive and M2M, will not be achievable technologically or economically with current networks.

Any architecture shift must not simply increase capacity; it must also improve the user experience. It must give the MNO granular control over how services are created, delivered, monitored, and optimized. It must make best use of capacity in each situation, to put the network at the service of the subscriber. It must make QoE — the single biggest differentiator within their control — the foundation for network control, revenue growth and subscriber loyalty.
By offering an exceptional user experience, MNOs can become the access provider of choice, part of their users' continuously connected lives as their trusted curator of apps, real-time communications and video.


“How to build massively scalable networks while guaranteeing Quality of Experience?”

As a result, the mobile industry has embarked on a journey to design tomorrow’s networks, borrowing heavily from the changes that have revolutionized enterprise IT departments with SDN (Software Defined Networking) and innovating with 5G and NFV (Networks Functions Virtualization) for instance. The target is to emulate some of the essential attributes of innovative service providers such as Facebook, Google and Netflix who have had to innovate and solve some of the very same problems.


QoE is rapidly becoming the major battlefield upon which network operators and content providers will differentiate and win consumers' trust. Quality of Experience requires a richly instrumented network, with feedback telemetry woven through its fabric to anticipate, detect and measure any potential failure.

Tuesday, March 8, 2016

Standards approach or Open Source?


[...] Over the last few years, wireless networks have started to adopt enterprise technologies and trends. One of these trends is the open source collaborative model, where, instead of creating a set of documents to standardize a technology and leave vendors to implement their interpretation, a collective of vendors, operators and independent developers create source code that can be augmented by all participants.

Originally started with the Linux operating system, the open source development model allows anyone to contribute, use, and modify source code that has been released by the community for free.

The idea is that a meritocratic model emerges, where feature development and overall technology direction are the result of the community's interest. Developers and companies gain influence by contributing, in the form of source code, blueprints, documentation, code reviews and bug fixes.

This model has proven beneficial in many cases for the creation of large software environments, ranging from operating systems (Linux) to HTTP servers (Apache) and big data (Hadoop), that have been adapted by many vendors and operators for their benefit.

The model provides the capacity to create and adopt new technologies cost-effectively, without necessarily having a large in-house developer group.
On the other hand, many companies find that the best-effort collaborative environment is not necessarily the most efficient model when the contributors come from very different backgrounds and business verticals.

While generic server operating systems, database technologies and HTTP servers have progressed rapidly and efficiently through the open source model, it is mostly because these are building-block elements designed to do only a fairly limited set of things.

SDN and NFV are fairly early in their development for mobile networks but one can already see that the level of complexity and specificity of the mobile environment does not lend itself easily to the adoption of generic IT technology without heavy customization.

In 2016, open source has become a very trendy buzzword in wireless, but the reality shows that the ecosystem is still trying to understand and harness the model for its purposes. Wireless network operators have been used to collaborating in fairly rigid and orthodox environments such as ETSI and 3GPP. These standardization bodies have been derided lately as slow and as producing sets of documentation that were ineffective, but they have been responsible for the roll-out of four generations of wireless networks and the interoperability of billions of devices, in hundreds of networks, with thousands of vendors.

Open source is seen by many as a means to accelerate technology invention with its rapid iteration process and its low documentation footprint. Additionally, it produces actual code that is pre-tested and integrated, leaving little space for ambiguity as to its intent or performance. It creates a very handy level playing field to start building new products and services.

The problem, though, is that many operators and vendors still treat open source in wireless as they did the standards, expecting a handful of contributing companies to do the heavy lifting of strategy, design and coding, and placing change requests and reviews after the fact. This strategy is unlikely to succeed. The companies and developers involved in open source coding are in it for their own benefit. Of course they are glad to contribute to a greater ecosystem by creating a common denominator layer of functional capabilities, but they are busy in parallel augmenting the mainline code with their customizations and enhancements to market their products and services.


One of the additional issues with open source in wireless for SDN and NFV is that there is actually very little that is designed specifically for wireless. SDN, OpenStack, VMWare, OpenFlow… are mostly defined for general IT, and you are more likely to find an insurance company, a bank or a media company at OpenStack forums than a wireless operator. The consequence is that while network operators can benefit from implementations of SDN or OpenStack in their wireless networks, the technology has not been designed for telco-grade applicability, and the chances of it evolving this way are slim without a critical mass of wireless-oriented contributors. Huawei, ALU and Ericsson are all very present in these forums and are indeed contributing greatly, but I would not rely on them too heavily to introduce the features necessary to ensure vendor agnosticism...

The point here is that being only a customer of open source code is not going to result in the creation of any added value without actual development. Mobile network operators and vendors that are on the fence regarding open source movements need to understand that this is not a spectator sport and active involvement is necessary if they want to derive differentiation over time.

Tuesday, March 1, 2016

Mobile World Congress 16 hype curve

Mobile World Congress 2016 was an interesting show in many aspects. Here are some of my views on the most and least hyped subjects, including mobile video, NFV, SDN, IoT, M2M, augmented and virtual reality, TCP optimization, VoLTE and others.

First, let's start with mobile video, my pet subject, as some of you might know. 2016 sees half of Facebook users being exclusively mobile, generating over 3/4 of the company's revenue, while half of YouTube views are on mobile devices and nearly half of Netflix's under-34 members watch from a mobile device. There is mobile and mobile, though, and a good 2/3 of these views occur on wifi. Still, internet video service providers see themselves becoming mobile companies faster than they thought. The result is increased pressure on mobile networks to provide fast, reliable video services, as 2K, 4K and 360-degree video, augmented and virtual reality are next on the list of services to appear. This continues to create distortions in the value chain, as encryption, ad blocking, privacy, security, net neutrality, traffic pacing and prioritization are being used as weapons of slow attrition by traditional and new content and service providers. On the network operators' side, many have deserted the video monetization battlefield. T-Mobile's Binge On seems to give MNOs pause for reflection on alternative models for video services cooperation. TCP optimization has been running hot as a technology for the last 18 months and has seen Teclo Networks acquired by Sandvine on the heels of this year's congress.

Certainly, I have felt a change of pace and tone in many announcements, with NFV's hyperbolic claims subsiding somewhat compared to last year. Specifically, we have seen several vendors' live deployments, but mostly revolving around launches of VoLTE, virtualized EPC for MVNOs, enterprises or verticals, and ubiquitous virtualized CPE, but still little in terms of multi-vendor, generic-traffic NFV deployments at scale. Talking about VoLTE, I now have several anecdotal pieces of evidence from Europe, Asia and North America that the services commercially launched are well below expectations in terms of quality and performance against circuit-switched voice.
The lack of maturity of standards for orchestration is certainly the chief culprit here, hindering progress towards open, multi-vendor service automation.
Proof can be found in the flurry of vendor "ecosystems". If everyone works so hard to be in one and each has their own, it underlines the market fragmentation rather than reducing it.
An interesting announcement saw Telefonica, BT, Korea Telecom, Telekom Austria, SK, Sprint and several vendors take a page from OPNFV's playbook and create probably one of the first open source projects within ETSI, aimed at delivering a collaborative MANO project.
I have been advocating for such a project for more than 18 months, so I certainly welcome the initiative, even if ETSI might not feel like the most natural place for an open source project. 

Overall, NFV feels more mature, but still very much disconnected from reality: a solution looking for problems to solve, with little in terms of new service creation. If all the hoopla leads to cloud-based VPNs, VoLTE and cheaper packet core infrastructure, the business case remains fragile.

The SDN announcements were somewhat muted, but showed good progress in SD-WAN and SD data center architecture, with the recognition, at last, that specialized switches will likely still be necessary in the short to medium term if we want a high performance software-defined fabric, even if it impacts agility. The compromises are a sign of a maturing market, not a failure to deliver on the vendors' part, in my opinion.

IoT and M2M were still ubiquitous and vague, depicted alternatively as the next big thing or as already here. The market fragmentation in terms of standards, technology, use cases and understanding leads to baseless, fantasist claims from many vendors (and operators) on the future of wearables, autonomous transport and connected objects, with little in terms of evidence of a coherent ecosystem forming. It is likely that a dominant player will emerge and provide a top-down approach, but the business case seems to hinge on killer apps that hint at next generation networks yet to be fulfilled.

5G was on many vendors' lips as well, even if it seems to consistently mean different things to different people, including MIMO, beamforming, virtualized RAN... What was clear, from my perspective, was that operators were ready at last to address latency (as opposed to, or in complement of, bandwidth) as a key resource and attribute to discriminate services and their associated network slices.

Big Data slid right down the hype curve this year, with very little in terms of announcements or even references in vendors' product launches or deployments. It now seems granted that any piece of network equipment, physical or virtual, must generate rivulets of data that stream to rivers and data lakes, to be avidly aggregated and correlated by machine learning algorithms into actionable insights in the form of analytics and alerts. Vendors show progress in reporting, but true multi-vendor holistic analytics remains extremely difficult, due to the fragmentation of vendors' data attributes and the necessity to have both data scientists and subject matter experts working together to discriminate actionable insights from false positives.

On the services side, augmented and virtual reality were revving up to the next hype phase, with a multitude of attendees walking blindly with goggles and smartphones stuck to their faces... not the smartest look, and unlikely to pass the novelty stage until integrated in less obtrusive displays. On the AR front, convincing use cases are starting to emerge, such as furniture shopping (where you can see and position furniture in your home by superimposing items from a catalogue app), that are pragmatic and useful without being too cumbersome. Anyone who has had to shop for furniture and send it back because it did not fit or the color wasn't really the same as the room will understand.
Ad blocking certainly became a subject of increased interest, as operators and service providers are still struggling for dominance. As encrypted data traffic increases, operators start to explore ways to provide services that users see as valuable and, if they hurt some of the OTTs' business models, it is certainly an additional bargaining chip. The melding and reforming of the mobile value chain continues and accelerates, with increased competition, collaboration and coopetition as MNOs and OTTs find a settling position. I have recently ranted about what's wrong with the mobile value chain, so I will spare you here.

Lastly, my personal interest project this year revolves around Mobile Edge Computing. I have started production on a report on the subject. I think the technology has the potential to unlock many new services in mobile networks and I can't wait to tell you more about it. Stay tuned for more!