Thursday, May 29, 2014

NFV & SDN part III: mobile video

I have spent the last couple of months with some of the most brilliant technologists and strategists working on the latest networking technologies, standards and code.
Cloud, SDN, NFV, OpenStack, network virtualization, opendaylight, orchestration...

Everyone is looking at making networks more programmable, agile, elastic and intelligent. Some of the sought benefits are faster time to market for new services, lower cost of operation, new revenue from new services, and simpler network operation and service orchestration. This is very much about making IT more flexible and cost-efficient.

Telcos, wireless vendors and operators are gravitating towards these organizations, hoping to benefit from this progress and implement it in wireless networks.

Here is what I don't quite get:
Mobile is the fastest-growing ICT segment in the world (30% CAGR). Video is the largest (>50% of data volume) and fastest-growing service in mobile (75% CAGR).
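Compound annual growth rates like these snowball quickly. The short sketch below projects the quoted figures forward; the starting volumes (100 units of mobile data, 50 of them video) are illustrative placeholders of my own, not measured traffic numbers.

```python
# Compound-growth projection using the CAGR figures quoted above (30% for
# mobile data overall, 75% for mobile video). Starting volumes are
# illustrative placeholders, not measured traffic numbers.

def project(volume, cagr, years):
    """Compound a starting volume at an annual growth rate."""
    return volume * (1 + cagr) ** years

years = 2
mobile = project(100, 0.30, years)  # all mobile data
video = project(50, 0.75, years)    # mobile video only

print(f"mobile: {mobile:.1f}, video: {video:.1f}, "
      f"video share: {video / mobile:.0%}")
```

At these rates, video goes from half the traffic to roughly 90% of it in just two years. The growth rates must of course converge eventually, but the arithmetic shows why video dominates any capacity discussion.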

Few, if any, of the organizations I have followed so far have dedicated telco (let alone wireless) working groups, and none seem to address the need for next-generation video delivery networks.
I am not half as smart as many of the engineers, technologists and strategists contributing to these organizations, so I must be missing something. Granted, in most cases these efforts are fairly recent; maybe they haven't gotten to video services yet? It strikes me, though, that no one speaks of creating better mobile video networks.

If wireless video is the largest, fastest-growing consumer service in the world, shouldn't we, as an industry, look at improving it? A week doesn't go by without a study showing that demand for wireless video streaming is increasing and that quality of experience is insufficient.

I am afraid that, as an industry, we are confusing means and goals. Creating better generic networks, using more generic hardware, interfaces and protocols to reduce the cost of operation and simplify administration, is a noble ambition, but it does not in itself guarantee cost reduction, and even less new services. What I have seen so far are more complex network topologies, with layer upon layer of hierarchical abstraction, sure to keep specialized vendors busy and rich for decades to come.

In parallel, we are seeing the opposite move from the likes of Google, Netflix, Apple and Facebook. When it comes to launching new services, it doesn't feel as if these companies look first at network architecture, cost savings, service orchestration or interfaces. I am sure those get addressed at some point in the process, but it looks like they start with the customer: what is the value proposition, what is the service, what is the experience, how will it be charged, who will pay...

Comparing these two processes might be unfair, I agree, but if you are a mobile network operator today, shouldn't you focus your energy on the largest and fastest-growing service on your network, which happens not to be profitable?
85% of the video traffic is OTT, and you get little revenue from it. You are struggling to deliver acceptable video quality for a service that keeps growing, already uses the majority of your resources, and you have no plan to improve it.
Why aren't we looking, as an industry, at creating a better wireless video network? Start from there and work out what the best architecture, interfaces and protocols could be. I bet the result would be different from our current endeavors.
None of the above-mentioned technologies has been designed specifically for video. Of course, generic networking can carry video, but I doubt it will deliver the best mobile video experience if video is not baked in at the design and architecture phase. And if these organizations are not the venue for that work, what is?

I am not advocating against SDN, NFV, OpenStack, etc., but I would hope that, sooner rather than later, a wireless- and video-specific focus is brought to bear in these organizations. It wouldn't feel right if we found out down the line that we had created a networking framework that is great for enterprise IT but not so good for the most important consumer service. Just saying...

Thursday, May 15, 2014

NFV & SDN Part II: Clouds & Openstack

I just came back from the OpenStack Summit, which took place in Atlanta this week. In my quest to better understand SDN, NFV and cloud maturity for mobile networks and video delivery, it is an unavoidable step. As announced a couple of weeks ago, this is a new project for me and a new field of interest.
I will chronicle my progress (or lack thereof) in this blog and use it to try to explain my understanding of the state of the technology and the market.
I am not a scientist and am somewhat slow to grasp new concepts, so you will undoubtedly find much to correct here. I appreciate your gentle comments as I progress.

So... where do we start? Maybe a couple of definitions.
What is (are) the cloud(s)? Clouds are environments where software resources can be virtualized and allocated dynamically to instantiate, grow and shut down services.
Public clouds are made available by corporations to consumers and businesses in a commercial fashion. They are usually designed to satisfy a single need (Storage, Computing, Database...). 
The most successful examples are Amazon Web Services, Google Drive, Apple iCloud and Dropbox. Pricing models are usually per-hour rental of a computing or database unit, or per-month rental of storage capacity. We will not address public clouds in this blog.
Private clouds are usually geo-dispersed capabilities, federated and instantiated as one logical network capacity for a single company; typical use cases are simple data storage or a development and testing sandbox. We will focus here on the implementation of cloud technology in wireless networks.
Cloud technology often relies on OpenStack to abstract compute, storage and networking functions into logical elements and to manage heterogeneous virtualized environments. OpenStack is the operating system of the cloud: it allows operators to instantiate Infrastructure- or Platform-as-a-Service (IaaS and PaaS respectively).
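To make the abstraction concrete, here is a toy model of what a cloud OS does conceptually: pool heterogeneous hosts into one logical capacity, then instantiate, grow and shut down services against that pool. This is an illustrative sketch of the idea, not OpenStack's actual API.

```python
# Toy model of cloud resource abstraction: physical hosts are pooled into one
# logical capacity, and services are instantiated, grown and shut down against
# the pool. Purely illustrative; real OpenStack exposes this via its own APIs.

class Cloud:
    def __init__(self):
        self.capacity = 0   # total vCPUs pooled from all hosts
        self.services = {}  # service name -> vCPUs allocated

    def add_host(self, vcpus):
        """Abstract a physical host into the logical compute pool."""
        self.capacity += vcpus

    def free(self):
        return self.capacity - sum(self.services.values())

    def instantiate(self, name, vcpus):
        if vcpus > self.free():
            raise RuntimeError("insufficient capacity")
        self.services[name] = vcpus

    def grow(self, name, extra):
        if extra > self.free():
            raise RuntimeError("insufficient capacity")
        self.services[name] += extra

    def shut_down(self, name):
        del self.services[name]

cloud = Cloud()
cloud.add_host(16)
cloud.add_host(32)           # two heterogeneous hosts, one logical pool
cloud.instantiate("epc", 8)  # spin up a service
cloud.grow("epc", 4)         # scale it elastically
print(cloud.free())          # remaining vCPUs in the pool
```

The point of the sketch is that the service never sees individual hosts, only the logical pool; that indirection is what makes elastic grow/shrink possible.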

The OpenStack program is also an open-source community, started by NASA and Rackspace and now independent and self-governed. It essentially functions as a collaborative development community aimed at defining and releasing OpenStack software packages.
After attending presentations and briefings from Deutsche Telekom, Ericsson, Dell, Red Hat, Juniper, Verizon, Intel and others, I have drawn some very preliminary thoughts I would like to share here:
OpenStack is in its 9th release (Icehouse) and wireless interest is glaringly lacking. It was set up primarily as an enterprise initiative, and while enterprise and telecom IT share many needs, wireless regulations tend to be much more stringent. CALEA (law enforcement) and Sarbanes-Oxley (accounting, traceability) are but a few of the provisions that would preclude OpenStack from running today in a commercial telco private cloud.
As presented by Verizon, Deutsche Telekom and other telcos at the summit, the current state of OpenStack does not allow it to be deployed "out of the box": development and operations teams must patch, adapt and stabilize the system for telco purposes. These patches and tweaks have a negative impact on performance, scalability and latency because they were not taken into account at the design phase; they are workarounds rather than fixes. Case studies were presented, ranging from CDN video caching in a wireless infrastructure to a generic sandbox for storage and software testing. The results show that the technology is not yet mature enough to enable telco-grade services.
Many companies are investing increasingly in OpenStack; still, I feel that a separate, telco-focused working group must be created in its midst if it is to reach telco-grade applicability.
More important, and perhaps more concerning, is my belief that commercial implementation of the technology requires a corresponding change in organizational setup and behaviour. Migrating to the cloud and OpenStack is traditionally associated with the supposed benefits of faster service roll-out, reduced time to market, and lower capex and opex, as specialized telco appliances "transcend" to the cloud and are virtualized on off-the-shelf hardware.
There is no free lunch out there. The technology is currently immature, but as it evolves we are starting to see that all these abstraction layers will require some very specialized skills to deploy, operate and maintain, and those skills are very rare right now. Witness HP, Canonical, Intel and Ericsson all advertising "we are hiring" on their booths and during their presentations and keynotes. I have the feeling that operators who want to implement these technologies will simply not have the internal skill set or capacity to roll them out. The large systems integrators might end up being the only winners, ultimately reaping the cost benefits of virtualized networks while selling network-as-a-service to their customers.
Network operators might end up trading one vendor lock-in for another, much stickier one if their services run on a third-party cloud. (I don't believe we can realistically talk about service migration from cloud to cloud and vendor to vendor when two hypervisors supposedly running standard interfaces can't really coexist today in the same service.)

Friday, May 2, 2014

NFV & SDN part I

In their eternal quest to reduce CAPEX, mobile network operators have been egging on telecom infrastructure manufacturers to adopt more open, cost effective computing capabilities.

You will remember that, close to 15 years ago, all telecom platforms had to be delivered on hardened, NEBS-certified Sun SPARC hardware running Solaris, with a full-fledged Oracle database, to be "telecom grade". Little by little, x86 platforms, MySQL databases and the Linux OS have penetrated the ecosystem. It was originally a vendor-driven initiative to reduce their third-party costs. The cost reduction was passed on to MNOs who were willing to risk implementing these new platforms. We have seen their adoption grow from greenfield operators in emerging countries to mature markets, first at the periphery of the network, then slowly making their way into business-critical infrastructure.

We are seeing today an analogous push to reduce costs further and ban proprietary hardware implementations with NFV. Pushed initially by operators, this initiative sees most network functions first transitioning from hardware to software, then being run in virtualized environments on off-the-shelf hardware.

The first companies to embrace NFV have been startups like Affirmed Networks. First met with scepticism, the company seems to have been able to design from scratch and commercially deploy a virtualized Evolved Packet Core in only four years. It certainly helps that the company was founded to the tune of over 100 million dollars from big names such as T-Ventures and Vodafone, which provided not only funding but presumably lab capacity at their parent companies to test and fine-tune the new technology.

Since then, vendors have started embracing the trend and are moving more or less enthusiastically towards virtualization of their offerings. Different approaches have emerged, from simple ports of their software to Xen or VMware virtualized environments to more accomplished OpenStack / OpenFlow platforms.

I am actively investigating the field and I have to say some vendors' strategies are head-scratching. In some cases, moving to a virtualized environment is counter-productive. Some telecom products are highly CPU-intensive or specialized and require dedicated resources to attain high performance and scalability in a cost-effective package; deep packet inspection and video processing seem to be good examples. Even the vendors who have virtualized their appliance or solution will, when pushed, admit that virtualization comes at a performance cost in the current state of the technology.

I have been reading the specs (OpenFlow, OpenStack) and I have to admit they seem far from the level of detail we usually see in telco specs. There is a lot of abstraction dedicated to redefining switching, but not much in terms of call flows, datagrams, semantics, service definitions, etc...

How the hell does one go about launching a service in a multi-vendor environment? Well, one doesn't. There is a reason why most NFV initiatives are still at the plumbing level, investigating SDN, SDDC, etc., or taking a single-vendor / single-service approach. I haven't been convinced yet by anyone's implementation of multi-vendor management, let alone "service orchestration". We are witnessing today islands of service virtualization in hybrid environments; we are still far from function virtualization per se.

The challenges are multiple: 
  • Which is better: a dedicated platform with a low footprint and power requirement that might be expensive and centralized, or thousands of virtual instances occupying hundreds of servers that might be individually cheap (COTS) but collectively not very cost- or power-efficient?
  • Will network operators trade capex for opex when they need to manage thousands of applications running virtually on IT platforms? How will their personnel, trained to troubleshoot problems by following the traffic and signalling path, adapt to this fluid, non-descript environment?
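The first question above lends itself to a back-of-the-envelope calculation. Every figure below (appliance price, server price, power draw, electricity price) is a hypothetical placeholder of mine, chosen only to show the shape of the trade-off, not a real vendor number.

```python
# Back-of-envelope cost comparison: one dedicated appliance versus a farm of
# COTS servers assumed to deliver the same throughput. All figures are
# hypothetical placeholders, used only to illustrate the calculation.

def total_cost(unit_capex, unit_power_kw, units, years,
               kwh_price=0.10, hours_per_year=8760):
    """Capex plus energy opex over the period, in the same currency unit."""
    energy = units * unit_power_kw * hours_per_year * years * kwh_price
    return unit_capex * units + energy

years = 5
# One expensive, power-dense, centralized appliance (assumed figures):
appliance = total_cost(unit_capex=500_000, unit_power_kw=2.0, units=1, years=years)
# 120 cheap COTS servers delivering the same capacity (assumed figures):
cots_farm = total_cost(unit_capex=5_000, unit_power_kw=0.4, units=120, years=years)

print(f"dedicated appliance: {appliance:,.0f}")
print(f"COTS farm:           {cots_farm:,.0f}")
```

With these (made-up) inputs, the COTS farm's aggregate power and hardware bill exceeds the appliance's over five years; change the server count or power draw and the answer flips, which is exactly why the question has no general answer.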
We are still early in this game, but many vendors are starting to purposefully position themselves in this space to capture the next wave of revenue. 

Will the lack of a programmable multi-vendor control environment force network operators to ultimately be virtualized themselves, relinquishing network management to the large IT and telecom equipment manufacturers? This is one of the questions I will attempt to answer going forward, as I investigate in depth the state of the technology and compare it with vendors' and MNOs' claims and assertions.
Stay tuned; more to come later this year, with a report on the technology, market trends and vendor capabilities in this space.