Showing posts with label openstack. Show all posts

Wednesday, January 8, 2020

Open or open source?

Those who know me know that I have long been a firm supporter of openness by design. It is important, though, not to conflate openness and open source when it comes to telco strategy.

Most network operators believe that any iteration of their network elements must be fully interoperable within their internal ecosystem (their network) and their external ecosystem (other telco networks). This is fundamentally what allows any phone user to roam and use any mobile network around the planet.
This need for interoperability has reinforced the importance of standards bodies such as ETSI and 3GPP and forums such as the GSMA over the last 20 years. This interoperability by design has led to the creation of rigid interfaces, protocols and datagrams that preside over how network elements should integrate and interface in telco and IP networks.
While this model has worked well for the purpose of creating a unified global aggregation of networks with 3G/4G, departing from the fragmentation of 2G (GSM, CDMA, TDMA, AMPS...), it has also somewhat slowed down and stifled the pace of innovation for network functions.

The last few years have seen an explosion of innovation in networks, stemming from the emergence of data centers, clouds, SDN and virtualization. The benefits have been considerable: freedom from proprietary hardware dependency, increased multi-tenancy, resource elasticity, traffic programmability, automation and, ultimately, the atomization of network functions into microservices. This allowed the creation of higher-level network abstractions without the need for low-level programming or coding (for more on this, read anything ever written by the excellent Simon Wardley). These benefits have been systematically developed and enjoyed by the companies that needed to scale their networks the fastest: the webscalers.

In the process, as the technologies underlying these new networks passed from prototype, to product, to service, to microservice, they have become commoditized. Many of these technologies, once close to maturity, have been open sourced, allowing a community of similarly interested developers to flourish and develop new products and services.

Telecom operators were inspired by this movement and decided that they, too, needed to evolve their networks into something more akin to an elastic cloud in order to decorrelate traffic growth from cost. Unfortunately, the desire for interoperability and the lack of engineering development resources led operators to try to influence and drive the development of a telco open source ecosystem without really participating in it. NFV (Network Functions Virtualization) and telco OpenStack are good examples of great ideas with poor results. Let's examine why.

NFV was an attempt to separate hardware from software and to stimulate a new ecosystem of vendors developing telco functions in a more digital fashion. Unfortunately, the design of NFV was a quasi-literal transposition of appliance functions, with little influence from SDN or microservice architecture. More importantly, it relied on an orchestration function that was to become the "app store" of the network. This orchestrator, to be truly vendor-agnostic, would have to be fully interoperable with all vendors adhering to the standard and preferably expose open interfaces to allow interchangeability of network functions and of orchestrator vendors. In practice, none of the traditional telecom equipment manufacturers had plans to integrate with a third-party orchestrator, and each would try to deploy its own as a condition for deploying its network functions. Correctly identifying the strategic risk, the community of operators started two competing open source projects: Open Source MANO (OSM) and the Open Network Automation Platform (ONAP).
Without entering into the technical details, both projects suffered to varying degrees from a cardinal sin: open source development is not a spectator sport. You do not decree an ecosystem or a community of developers into existence. You do not demand contribution, you earn it. The only way open source projects succeed is if their main sponsors actively contribute (code, not diagrams or specs) and if the code goes into production and its benefits are easily illustrated. In both cases, most operators opted to rely heavily on third parties to develop what they envisioned, with insufficient real-life experience to ensure the results were up to the task. Only those who roll up their sleeves and develop really benefit from these projects.

OpenStack was, in comparison, already a successful ecosystem and open source development forum when telco operators tried to bend it to their purpose. It had been deployed in many industries, ranging from banking and insurance to transportation and manufacturing, and had a large developer community. Operators thought that piggybacking on this community would accelerate the development of an OpenStack suited to telco operations. The first efforts were to introduce traditional telco requirements (high availability, geo-redundancy, granular scalability...) into what was fundamentally a best-effort IT cloud infrastructure management model. As I wrote six years ago, OpenStack at that stage was ill-suited to the telco environment. And it remained so. Operators resisted hiring engineers and coding enough functionality into OpenStack to make it telco-grade, instead relying on their traditional telco vendors to do the heavy lifting for them.

The lessons here are simple.
If you want to build a network that is open by design, to ensure vendor independence, you need to manage the control layer yourself. In all likelihood, trying to specify it and asking others to build it for you will fail if you have never built one yourself.
Open source can be a good starting point if you want to iterate and learn fast, prototype and test, and get smart enough to know what is mature, what should be bought, what should be developed and where the differential value lies. Don't expect open source to be a means for others to do your labour. The only way you get more out of open source than you put in is a long-term investment with real contribution, not just guidance and governance.

Tuesday, March 8, 2016

Standards approach or Open Source?


[...] Over the last few years, wireless networks have started to adopt enterprise technologies and trends. One of these trends is the open source collaborative model where, instead of creating a set of documents to standardize a technology and leaving vendors to implement their interpretations, a collective of vendors, operators and independent developers creates source code that can be augmented by all participants.

Originally started with the Linux operating system, the open source development model allows anyone to contribute, use, and modify source code that has been released by the community for free.

The idea is that a meritocratic model emerges, where feature development and overall technology direction are the result of the community's interest. Developers and companies gain influence by contributing, in the form of source code, blueprints, documentation, code reviews and bug fixes.

This model has proven beneficial in many cases for the creation of large software environments, ranging from operating systems (Linux) to HTTP servers (Apache) and big data (Hadoop), that have been adapted by many vendors and operators for their benefit.

The model enables the cost-effective creation and adoption of new technologies without necessarily requiring a large in-house developer group.
On the other hand, many companies find that the best-effort collaborative environment is not necessarily the most efficient model when the contributors come from very different backgrounds and business verticals.

While generic server operating systems, database technologies and HTTP servers have progressed rapidly and efficiently through the open source model, that is mostly because they are building-block elements designed to do only a fairly limited set of things.

SDN and NFV are fairly early in their development for mobile networks but one can already see that the level of complexity and specificity of the mobile environment does not lend itself easily to the adoption of generic IT technology without heavy customization.

In 2016, open source has become a very trendy buzzword in wireless, but the reality shows that the ecosystem is still trying to understand and harness the model for its purposes. Wireless network operators have been used to collaborating in fairly rigid and orthodox environments such as ETSI and 3GPP. These standardization bodies have lately been derided as slow and as producing documentation that is ineffective, but they have been responsible for the rollout of four generations of wireless networks and the interoperability of billions of devices, in hundreds of networks, with thousands of vendors.

Open source is seen by many as a means to accelerate technology invention with its rapid iteration process and its low documentation footprint. Additionally, it produces actual code that is pre-tested and integrated, leaving little space for ambiguity as to its intent or performance. It creates a very handy level playing field from which to start building new products and services.

The problem, though, is that many operators and vendors still treat open source in wireless as they did the standards, expecting a handful of contributing companies to do the heavy lifting of strategy, design and coding, and placing change requests and reviews after the fact. This strategy is unlikely to succeed. The companies and developers involved in open source coding are in it for their own benefit. Of course they are glad to contribute to a greater ecosystem by creating a common-denominator layer of functional capabilities, but in parallel they are busy augmenting the mainline code with their own customizations and enhancements to market their products and services.


One of the additional issues with open source in wireless for SDN and NFV is that there is actually very little that is designed specifically for wireless. SDN, OpenStack, VMware, OpenFlow… are mostly defined for general IT, and you are more likely to find an insurance company, a bank or a media company at OpenStack forums than a wireless operator. The consequence is that while network operators can benefit from implementations of SDN or OpenStack in their wireless networks, the technology has not been designed for telco-grade applicability, and the chances of it evolving this way are slim without a critical mass of wireless-oriented contributors. Huawei, ALU and Ericsson are all very present in these forums and are indeed contributing greatly, but I would not rely on them too heavily to introduce the features necessary to ensure vendor agnosticism...

The point here is that being only a customer of open source code is not going to result in the creation of any added value without actual development. Mobile network operators and vendors that are on the fence regarding open source movements need to understand that this is not a spectator sport and active involvement is necessary if they want to derive differentiation over time.

Wednesday, August 12, 2015

The orchestrator conundrum in SDN and NFV

We have seen over the last year a flurry of activity around orchestration in SDN and NFV. As I have written about here and here, orchestration is a key element and will likely make or break SDN and NFV success in wireless.

A common mistake is to assume that orchestration covers the same elements or objectives in SDN and NFV. This is a significant source of confusion because, while SDN orchestration is about resource and infrastructure management, NFV's should be about service management. There is admittedly a level of overlap, particularly if you define services as sets of both network and customer rules and policies.

To simplify, here we'll say that SDN orchestration is about resource allocation and about virtual, physical and mixed infrastructure auditing, assurance and management, while NFV's is about creating rules for traffic and service instantiation based on subscriber, media, origin, destination, etc.
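This division of labor can be sketched in code. The toy model below is purely illustrative: no real SDN controller or NFV orchestrator exposes these class or method names, and the checks stand in for far richer logic.

```python
# Toy model of the split described above. All names are invented for
# illustration; no real SDN controller or MANO exposes this API.

class SdnOrchestrator:
    """Resource orchestration: allocates and audits infrastructure."""

    def __init__(self, total_vcpus):
        self.total_vcpus = total_vcpus
        self.allocated = {}

    def allocate(self, owner, vcpus):
        used = sum(self.allocated.values())
        if used + vcpus > self.total_vcpus:
            raise RuntimeError("insufficient infrastructure capacity")
        self.allocated[owner] = self.allocated.get(owner, 0) + vcpus


class NfvOrchestrator:
    """Service orchestration: instantiates services according to
    subscriber- and traffic-level rules."""

    def __init__(self, sdn):
        self.sdn = sdn  # complementary: service decisions need resource visibility
        self.services = {}

    def instantiate(self, service, subscriber_class, vcpus):
        # A service-level policy (a hypothetical rule for this sketch)...
        if subscriber_class not in ("consumer", "premium"):
            raise ValueError("unknown subscriber class")
        # ...backed by a resource-level allocation from the SDN layer.
        self.sdn.allocate(service, vcpus)
        self.services[service] = subscriber_class


sdn = SdnOrchestrator(total_vcpus=8)
nfv = NfvOrchestrator(sdn)
nfv.instantiate("video-optimizer", "premium", vcpus=4)
```

The point of the sketch is the dependency direction: the service orchestrator consumes the resource orchestrator, which is why the two need to be integrated rather than merely coexist.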

The two orchestration models are complementary (it is harder to create and manage services if you do not have visibility into, or an understanding of, available resources and, conversely, it can be more efficient to manage resources knowing what services run on them) but not necessarily well integrated. A bevy of standards and open source organizations (ETSI ISG NFV, OPNFV, MEF, OpenStack, OpenDaylight...) are busy trying to map one onto the other, which is no easy task. SDN orchestration is well defined in terms of its purview, less so in terms of implementation, but a few models are available to experiment with. NFV orchestration is in its infancy, still defining what the elements of service orchestration are, their proposed interfaces with the infrastructure and the VNFs, and, generally speaking, how to create a model for service instantiation and management.

For those who have followed this blog and my clients who have attended my SDN and NFV in wireless workshop, it is well known that the management and orchestration (MANO) area is under intense scrutiny from many operators and vendors alike.
Increasingly, infrastructure vendors who are seeing the commoditization of their cash cow understand that the brain of tomorrow's network will be in MANO.
Think of MANO as the network's app store. It controls which apps (VNFs) are instantiated and what level of resources is necessary to manage them, and it stitches VNFs together (service chaining) to create services.
The problem is that MANO is not yet defined by ETSI, so anyone who wants to orchestrate VNFs today is either building their own or stuck with the handful of vendors providing MANO-like engines. Since MANO is ill-defined, the integration requires a certain level of proprietary effort. Vendors will say that it is all based on open interfaces, but the reality is that there is no mechanism in the standard today for a VNF to declare its capabilities, its needs and its intent, so a MANO integration requires some level of abstraction or deep fine-tuning.
As a result, MANO can become very sticky if deployed in an operator network. The VNFs can come and go and vendors can be swapped at will, but the MANO has the potential to be a great anchor point.
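To make the missing mechanism concrete, here is a hypothetical sketch of what a self-describing VNF manifest could look like. Every field name is invented; nothing like this existed in the ETSI specifications at the time, which is precisely the integration gap described above.

```python
# Hypothetical VNF manifest: a way for a VNF to declare its capabilities,
# needs and intent so a MANO could consume it without vendor-specific
# integration. The schema and vendor name are invented for illustration.

vnf_manifest = {
    "name": "virtual-firewall",
    "vendor": "ExampleCo",                        # hypothetical vendor
    "capabilities": ["packet-filtering", "nat"],  # what it can do
    "needs": {"vcpus": 2, "memory_gb": 4},        # what it requires
    "intent": "inline-l3",                        # how it expects to be chained
}


def can_host(manifest, free_vcpus, free_memory_gb):
    """A MANO-side check that is only possible because the VNF
    declares its own needs."""
    needs = manifest["needs"]
    return free_vcpus >= needs["vcpus"] and free_memory_gb >= needs["memory_gb"]
```

Without a declaration of this kind, each VNF onboarding becomes a bespoke integration project, which is exactly what makes the MANO sticky.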
It is not a surprise therefore to see vendors investing heavily in this field or acquiring the capabilities:

  • Cisco acquired TailF in 2014
  • Ciena acquired Cyan this year
  • Cenx received $12.5M in funding this year...

At the same time, Telefonica has launched an open source collaborative effort called openMANO to stimulate the industry and reduce risks of verticalization of infrastructure / MANO vendors.

For more information on how SDN and NFV are implemented in wireless networks, vendors and operators strategies, look here.

Thursday, July 9, 2015

Announcing SDN / NFV in wireless 2015

On the heels of my presentation at the NFV World Congress in San Diego this spring, my presentation and panels at LTE World Summit on network virtualization, and my anticipated participation at the SDN & OpenFlow World Summit in the fall, I am happy to announce production of "SDN / NFV in wireless networks 2015".

This report, to be released in September, will feature my review of the progress of SDN and NFV as technologies transitioning from PoC to commercial trials and limited deployments in wireless networks.



The report provides a step by step strategy for introducing SDN and NFV in your product and services development.


  • Drivers for SDN and NFV in telecom networks 
  • Public, private, hybrid, specialized clouds 
  • Review of SDN and NFV standards and open source initiatives
  • SDN 
    • Service chaining
    • Apache CloudStack, Microsoft Cloud OS, Red Hat, Citrix CloudPlatform, OpenStack, VMware vCloud
    • SDN controllers (OpenDaylight, ONOS) 
    • SDN protocols (OpenFlow, NETCONF, ForCES, YANG...)
  • NFV 
    • ETSI ISG NFV 
    • OPNFV 
    • OpenMANO 
    • NFVRG 
    • MEF LSO 
    • Hypervisors: VMware vs. KVM vs. containers
  • How does it all fit together? 
  • Core and RAN networks NFV roadmap
  • Operators strategy and deployments review: AT&T, China Unicom, Deutsche Telekom, EE, Telecom Italia, Telefonica, Verizon...
  • Vendors strategy and roadmap review: Affirmed networks, ALU, Cisco, Ericsson, F5, HP, Huawei, Intel, Juniper, Oracle, Red Hat... 
Can't wait for the report? Want more in-depth and personalized training? A five-hour workshop and strategy session is available now to answer your specific questions and help you chart your product and services roadmap, while understanding your competitors' strategy and progress.

Tuesday, May 5, 2015

NFV world congress: thoughts on OPNFV and MANO

I am this week in sunny San Jose, California, at the NFV World Congress, where on Thursday I will chair the stream on Policy and Orchestration - NFV Management.
My latest views on SDN / NFV implementation in wireless networks are published here.

The show started today with a mini-summit on OPNFV, looking at the organization's mission, roadmap and contribution to date.

The workshop was well attended, with over 250 seats occupied and a good number of people standing in the back. On the purpose of OPNFV, it feels that the organization is still trying to find its footing, hesitating between being a transmission belt between ETSI NFV and open source implementation projects, and graduating to a prescriptive set of blueprints for NFV implementations in wireless networks.

If you have trouble following, you are not the only one; I am quite confused myself. I thought OpenStack had a mandate to create source code for managing cloud network infrastructure, and that NFV was looking at managing services in a virtualized fashion, which could sit on premises, in clouds or in hybrid environments. Since ETSI NFV does not produce code, why do we need OPNFV to do it?

Admittedly, the organization is not necessarily deterministic in its roadmap, but rather works on what its members feel is needed. As a result, it has decided that its first release, code-named ARNO, will support KVM as its hypervisor environment and will feature an OpenStack architecture underpinned by an OpenDaylight-based SDN controller. ARNO should be released "this spring" and is limited in scope, as a first attempt to provide an example of carrier-grade, ETSI NFV-based source code for managing an SDN infrastructure. Right now, ARNO is focused on the VIM (Virtualized Infrastructure Manager); since the full MANO is not yet standardized and is felt to be too big a chunk for a first release, it will be part of a later requirements phase. The organization advocates pushing requirements and bug resolutions upstream (read: to other open source communities) to make the whole SDN / NFV stack more "carrier-grade".

This is where, in my mind, the reasoning breaks down. There is a contradiction in terms and intent here. On one hand, OPNFV advocates that there should not be separate branches within implementation projects such as OpenStack for carrier-specific requirements. (Carrier-grade is the generic shorthand for high availability, scalability and high performance.) The rationale is that these requirements could benefit the whole OpenStack ecosystem. On the other hand, OPNFV seems to have been created primarily to implement and test NFV-based code for carrier environments. Why do we need OPNFV at all if we can push these requirements within OpenStack and ETSI NFV? The organization feels more like an attempt to supplement or even replace ETSI NFV with an open source collaborative project that would be out of ETSI's hands.

More importantly, if you have been to an OpenStack meeting, you know that you are probably twice as likely to meet people from the banking, insurance, media or automotive industries as from the telecommunications space. I have no doubt that, theoretically, everyone would like more availability, scalability and performance, but practically, the specific needs of each enterprise segment rarely mean they are willing to pay for over-engineered networks. Telco carrier-grade was born from regulatory pressure to provide a public infrastructure service; many enterprises wouldn't know what to do with the complications and constraints arising from it.

As a result, I personally doubt that the telcos and forums such as OPNFV will succeed in influencing larger groups such as OpenStack to deliver a "carrier-grade" architecture and implementation. I think telco operators and vendors are a little confused by open source. They essentially treat it as a standard, submitting change requests, requirements and gap analyses, while not enough is done (by the operator community at least) to actually get their hands dirty and code. The examples of AT&T, Telefonica, Telecom Italia and some others are not, in my mind, reflective of the industry at large.

If ETSI were more effective, service orchestration in MANO would be the first agenda item, and plumbing such as the VIM would be delegated to more advanced groups such as OpenStack. If a network is to become truly elastic, programmable, self-reliant and agile in a multi-vendor environment, then MANO is the brain, and it has to be defined and implemented by the operators themselves. Otherwise, we will see Huawei, Nokialcatelucent, Ericsson, HP and others effectively become the app store of the networks (last I checked, it did not work out very well for operators when Apple and Android took control of that value chain...). Vendors have no real incentive to make orchestration open and fulfill the vendor-agnostic vision of NFV.


Monday, February 23, 2015

The future is cloudy: NFV 2020 part II

I have received some comments after my previous post arguing that maybe the future of SDN and NFV is not as far off as I am predicting. As we all bask in the pre-Mobile World Congress excitement, inundated by announcements from vendors and operators alike trying to catch the limelight before the deafening week begins, I thought I would clarify some of my thoughts.

We have seen already this week some announcements of virtualization plans, products and even deployments.

One of the main problems with a revolutionary approach such as SDN and/or NFV implementation is that it suggests a complete network overhaul to deliver its full benefits. In all likelihood, no network operator is able to fully execute changes of this kind on less than a ten-year timescale, so what to do first?

The choice is difficult, since there are a few use cases that seem easy enough to roll out but deliver little short-term benefit (vCPE, some routing and switching functions...), while the projects that should deliver the highest savings, the meaty ones, seem quite far from maturity (EPC, IMS, c-RAN...). Any investment on this front is going to be just that... an investment, with little to no return in the short term.

The problem is particularly difficult to solve because most of the value associated with the virtualization of mobile networks in the short term is supposedly tied to capex and opex savings. I have previously highlighted this trend, and it is not abating; if anything, it is accelerating.
Islands of SDN or NFV implementation in a sea of legacy network elements are not going to generate much savings. They could arguably generate new revenue streams if they were used to launch new services, but the focus so far has been to emulate and translate physical functions and networks into virtualized ones, with little effort in terms of new service creation.

As a result, the business case to deploy SDN or NFV in a commercial network today is negative and likely to stay so for the next few years. I expect the momentum to continue, though, since it will have to work and to deliver the expected savings for network operators to stand a chance to stay in business.

The other side of this coin is the service offering.  While flexibility, time to market and capacity to launch new services are always quoted as some of the benefits of network virtualization, it seems that many operators have given up on innovation and service creation. The examples of new services are few and far between and I would hope that these would be the object of more focused efforts.

At last, it seems one of my predictions may be fulfilled shortly: a friend pointed out that this year's GSMA freebie for its members at the show will be... a selfie stick.

Monday, October 20, 2014

Report from SDN / NFV shows part I

Wow! Last week was a busy week for everything SDN / NFV, particularly in wireless. My in-depth analysis of the segment is captured in my report. Here are a few thoughts on the latest news.

First, as is now almost traditional, a third white paper was released by network operators on Network Functions Virtualization. Notably, the original group of 13 who co-wrote the first manifesto that spurred the creation of ETSI ISG NFV has now grown to 30. The Industry Specification Group now counts 235 companies (including yours truly) and has seen 25 proofs of concept initiated. In short, the white paper announces another two-year term of effort beyond the initial timeframe. This new phase will focus on multi-vendor orchestration operability and integration with legacy OSS/BSS functions.

MANO (orchestration) remains a point of contention, and many are starting to recognise the growing threat and opportunity the function represents. Some operators (like Telefonica) actually seem to have reached the same conclusions as I have in this blog and are starting to look deeply into what implementing MANO means for the ecosystem.

I will go a step further today. I believe that MANO in NFV has the potential to evolve the same way the app stores did in wireless. It is probably an apt comparison: both are used to safeguard, reference, inventory and manage the propagation and lifecycle of software instances.

In both cases, the referencing of the apps/VNFs is a manual process, with arbitrary rules that can lead to a dominant position if not caught early. It would be relatively easy, in this nascent market, for an orchestrator vendor to integrate as many VNFs as possible, with some "extensions" to lock in the segment, as Apple and Google did in mobile.

I know, "open" is the new "organic", but for me there is a clear need to create an open source MANO project. Let's call it "OpenHand"?

You can view below a mash-up of the presentations I gave at the show last week and at SDN & NFV USA in Dallas the week before.



More notes on these past few weeks soon. Stay tuned.

Tuesday, September 9, 2014

SDN & NFV part VI: Operators, dirty your MANO!

While NFV in ETSI was initially started by network operators with their founding manifesto, in many instances we see that although there is a strong desire to force the commoditization of telecom appliances, there is little appetite among operators to perform the sophisticated integration necessary for these new systems to work.

This is, for instance, reflected in MANO, where operators seem to have put back the onus on vendors to lead the effort. 

Some operators (Telefonica, AT&T, NTT…) seem to invest resources not only in monitoring the process but also in the actual development of the technology, but by and large, according to my study, MNOs seem to have taken a passenger seat in NFV implementation efforts. Many vendors note that MNOs tend to have a very hands-off approach towards the PoCs they "participate" in, offering guidance and requirements or, in some cases, just lending their name to the effort without "getting their hands dirty".

The Orchestrator’s task in NFV is to integrate with OSS/BSS and to manage the lifecycle of the VNFs and NFVI elements. 

It onboards new network services and VNFs, and it performs service chaining, in the sense that it decides which VNFs traffic must traverse, and in what order, according to routing rules and templates.

These routing rules are called forwarding graphs. Additionally, the Orchestrator performs policy management between VNFs. Since all VNFs are proprietary, integrating them within a framework that allows their components to interact is a huge undertaking. MANO is probably the part of the specification that is the least mature today and requires the most work.
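A forwarding graph can be sketched as an ordered routing rule: each traffic class maps to the sequence of VNFs its flows must traverse. The traffic classes and VNF names below are invented for illustration; real descriptors are far richer.

```python
# Minimal sketch of forwarding graphs as ordered service chains.
# Traffic classes and VNF names are invented for illustration.

FORWARDING_GRAPHS = {
    "consumer-video": ["firewall", "video-optimizer", "nat"],
    "enterprise-vpn": ["firewall", "ipsec-gateway"],
}


def service_chain(traffic_class):
    """Return the ordered list of VNFs this traffic class must traverse;
    unclassified traffic takes a default path."""
    return FORWARDING_GRAPHS.get(traffic_class, ["default-router"])
```

The integration difficulty described above lives between these lines: because each VNF is proprietary, making "firewall" and "video-optimizer" actually hand traffic to one another is the hard part.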


Since it is the brain of the framework, failure of MANO to reach a level of maturity enabling consensus among the participants of the ISG will inevitably relegate NFV to vertical implementations. This could lead to a network with a collection of vertically virtualized elements, each having its own MANO or very high-level API abstractions, considerably reducing overall system elasticity and programmability. SDN and OpenStack-based models can be used for MANO orchestration of resources (the Virtualized Infrastructure Manager) but offer little applicability in the pure service orchestration and VNF management field beyond the simplest IP routing tasks.


Operators who are serious about NFV in wireless networks should consider developing their own orchestrator or, at a minimum, implementing strict orchestration guidelines. They could force vendors to adopt a minimum set of VNF abstraction templates for service chaining and policy management.

Thursday, June 26, 2014

LTE World Summit 2014

This year's 10th edition of the conference seems to have found a new level of maturity. While VoLTE, RCS and IMS are still subjects of interest, we seem to be past the hype at last (see last year), with a more pragmatic outlook towards implementation and monetization.

I was happy to see that most operators are now recognizing the importance of managing video experience for monetization. Du UAE's VP of Marketing, Vikram Chadha seems to get it:
"We are transitioning our pricing strategy from bundles and metering to services. We are introducing email, social media, enterprise packages and are looking at separating video from data as a LTE monetization strategy."
As a result, the keynotes were more prosaic than in past editions, focusing on the cost of spectrum acquisitions and the regulatory pressure in the European Union preventing operators from mounting any defensible position against the OTT assault on their networks. Much of the agenda focused on pragmatic subjects such as roaming, pricing, policy management, heterogeneous networks and wifi/cellular handover. Nothing obviously earth-shattering on these subjects, but steady progress, as the technologies transition from lab to commercial trials and deployment.

As an example, there was a great presentation by Bouygues Telecom's EVP of Strategy, Frederic Ruciak, highlighting the company's strategy for the launch of LTE in France, a very competitive market, and how the company was able to achieve the number one spot in LTE market share despite being the number three "challenger" in 2G and 3G.

The next buzzword on the hype cycle to rear its head is NFV, with many operator CTOs publicly hailing the new technology as the magic bullet that will allow them to "launch services in days or weeks rather than years". I am getting quite tired of hearing that rationalization as an excuse for the multimillion-dollar investments made in this space, especially when no one seems to know what these new services will be. Right now, the only arguable benefit is capex containment, and I have seen little evidence that it will go beyond this stage in the mid term. Like the teenage sex joke, no one seems to know what it is, but everybody claims to be doing it.
There is still much to be resolved on this matter, and that discussion will continue for some time. The interesting new positioning I heard at the show is appliance vendors referring to their offerings as PNFs (as in physical) in contrast to, and as enablers of, VNFs. Although it sounds like a marketing trick, it makes a lot of sense for vendors to illustrate how NFV inserts itself into a legacy network, leading inevitably to a hybrid network architecture.

The consensus here seems to be that there are two prevailing strategies for the introduction of virtualized network functions. 

  1. The first one, "cap and grow", sees existing infrastructure equipment capped at a certain capacity and complemented, little by little, by virtualized functions, allowing incremental traffic to find its way onto the virtualized infrastructure. A variant might be "cap and burst", where a function subject to bursty traffic is dimensioned on physical assets to the mean peak traffic, and any excess traffic is diverted to a virtualized function. 
  2. The second seems to favour the creation of vertical virtualized networks for market or traffic segments that are greenfield, M2M and VoLTE being the most cited examples. 
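The "cap and burst" variant amounts to a simple admission decision. The sketch below illustrates the split in Python, assuming a hypothetical fixed appliance capacity; it shows the principle only and is not an actual traffic-steering implementation:

```python
# Hypothetical sketch of a "cap and burst" admission decision: physical
# appliances are dimensioned to a fixed capacity, and any excess traffic
# is diverted to virtualized function instances. The 40 Gbps cap is an
# invented figure for illustration.

PHYSICAL_CAPACITY_GBPS = 40.0  # assumed dimensioning of the legacy appliances

def route_traffic(offered_load_gbps: float) -> dict:
    """Split the offered load between physical and virtual infrastructure."""
    physical = min(offered_load_gbps, PHYSICAL_CAPACITY_GBPS)
    burst = max(0.0, offered_load_gbps - PHYSICAL_CAPACITY_GBPS)
    return {"physical_gbps": physical, "virtual_gbps": burst}

# Below the cap, everything stays on the physical appliances...
print(route_traffic(25.0))  # {'physical_gbps': 25.0, 'virtual_gbps': 0.0}
# ...above it, only the excess bursts to the virtualized function.
print(route_traffic(55.0))  # {'physical_gbps': 40.0, 'virtual_gbps': 15.0}
```

In practice the diversion point would be enforced by load balancers or the orchestration layer, but the dimensioning logic is this simple threshold.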

Both strategies have advantages and flaws that I explore in my upcoming report, "NFV & virtualization in mobile networks 2014". Contact me for more information.



Thursday, May 29, 2014

NFV & SDN part III: mobile video

I have spent the last couple of months with some of the most brilliant technologists and strategists working on the latest networking technologies, standards and code.
Cloud, SDN, NFV, OpenStack, network virtualization, OpenDaylight, orchestration...

Everyone looks at making networks more programmable, agile, elastic, intelligent. Some of the sought benefits are faster time to market for new services, lower cost of operation, new revenue from new services, simpler network operation and service orchestration... This is very much about making IT more flexible and cost efficient.

Telcos, wireless vendors and operators are gravitating towards these organizations, hoping to benefit from this progress and implement it in wireless networks.

Here is what I don't quite get:
Mobile is the fastest-growing ICT in the world (30% CAGR). Video is the largest (>50% of data volume) and fastest-growing service in mobile (75% CAGR). 

Few, if any, of the working groups or organizations I have followed so far have dedicated telco (let alone wireless) working groups, and none seem to address the need for next-generation video delivery networks.
I am not half as smart as many of the engineers, technologists and strategists contributing to these organizations, so I must be missing something. Granted, in most cases these efforts are fairly recent; maybe they haven't gotten to video services yet? It strikes me, though, that no one speaks of creating better mobile video networks.

If wireless video is the largest, fastest-growing consumer service in the world, shouldn't we, as an industry, look at improving it? A week doesn't go by without a study showing that wireless video streaming demand is increasing and that quality of experience is insufficient.

I am afraid that, as an industry, we are confusing means and goals. Creating better generic networks, using more generic hardware, interfaces and protocols to reduce costs of operation and simplify administration, is a noble ambition, but it does not in itself guarantee cost reduction and even less new services. What I have seen so far are more complex network topologies with layer upon layer of hierarchical abstraction, sure to keep specialized vendors busy and rich for decades to come.

In parallel, we are seeing opposite moves made by the likes of Google, Netflix, Apple or Facebook. When it comes to launching new services, it doesn't feel like these companies look first at network architecture, cost savings, service orchestration, interfaces... I am sure these get addressed at some point in the process, but it looks like it starts with the customer: what is the value proposition, what is the service, what is the experience, how will it be charged, who will pay...

Comparing these two processes might be unfair, I agree, but if you are a mobile network operator today, shouldn't you focus your energy on what is the largest and fastest-growing service on your network, which happens not to be profitable? 
85% of the video traffic is OTT and you get little revenue from it. You are struggling to deliver acceptable video quality for a service that is growing, that already uses the majority of your resources, and you have no plan to improve it. 
Why aren't we looking, as an industry, at creating a better wireless video network? Start from there and look at what the best architecture, interfaces and protocols could be... I bet the result would be different from our current endeavors. 
None of the above-mentioned technologies was designed specifically for video. Of course, generic networking can carry video, but I doubt it will deliver the best mobile video experience if video is not baked in at the design and architectural phase. And if these are not the venues for it, what is?

I am not advocating against SDN, NFV, OpenStack, etc., but I would hope that sooner rather than later, wireless- and video-specific focus is brought to bear in these organisations. It wouldn't feel right if we found out down the line that we had created a networking framework that is great for enterprise IT but not so good for the most important consumer service. Just saying... 

Thursday, May 15, 2014

NFV & SDN Part II: Clouds & Openstack

I just came back from the OpenStack Summit, which took place in Atlanta this week. In my quest to better understand SDN, NFV and cloud maturity for mobile networks and video delivery, it is an unavoidable step. As announced a couple of weeks ago, this is a new project and a new field of interest for me.
I will chronicle in this blog my progress (or lack thereof) and will use this tool to try and explain my understanding of the state of the technology and the market. 
I am not a scientist and am somewhat slow to grasp new concepts, so you will undoubtedly find much to correct here. I appreciate your gentle comments as I progress.

So... where do we start? Maybe a couple of definitions.
Clouds
What is (are) the cloud(s)? Clouds are environments where software resources can be virtualized and allocated dynamically to instantiate, grow and shut down services.
Public clouds are made available by corporations to consumers and businesses in a commercial fashion. They are usually designed to satisfy a single need (Storage, Computing, Database...). 
The best-known examples are Amazon Web Services, Google Drive, Apple iCloud and Dropbox. Pricing models are usually per-hour rental of a computing or database unit, or per-month rental of storage capacity. We will not address public clouds in this blog.
Private clouds are usually geo-dispersed capabilities, federated and instantiated as one logical network capacity for a single company. Typical use cases are simple data storage or development and testing sandboxes. We will focus here on the implementation of cloud technology in wireless networks.
OpenStack abstracts compute, storage and networking functions into logical elements and manages heterogeneous virtualized environments. It is often described as the operating system of the cloud, allowing the instantiation of infrastructure- or platform-as-a-service (IaaS and PaaS respectively).
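As a toy illustration of the kind of abstraction this provides, the sketch below mimics the placement decision a compute scheduler makes: matching a requested flavor (vCPUs, RAM) against available hypervisors. The host names and capacities are invented for the example; this is not actual OpenStack code:

```python
# Toy sketch of the compute-scheduling abstraction an IaaS layer provides:
# a requested "flavor" (vCPUs, RAM) is placed on the first hypervisor with
# enough free capacity. Host names and sizes are hypothetical.

hypervisors = [
    {"name": "compute-1", "free_vcpus": 4, "free_ram_mb": 8192},
    {"name": "compute-2", "free_vcpus": 16, "free_ram_mb": 65536},
]

def schedule(flavor_vcpus, flavor_ram_mb, hosts):
    """Return the first host that can fit the flavor, claiming its resources."""
    for host in hosts:
        if host["free_vcpus"] >= flavor_vcpus and host["free_ram_mb"] >= flavor_ram_mb:
            host["free_vcpus"] -= flavor_vcpus
            host["free_ram_mb"] -= flavor_ram_mb
            return host["name"]
    return None  # no capacity anywhere: the instance request is rejected

print(schedule(8, 16384, hypervisors))  # 'compute-2' -- compute-1 is too small
print(schedule(8, 16384, hypervisors))  # 'compute-2' again (still has room)
print(schedule(8, 65536, hypervisors))  # None -- not enough capacity left
```

The real scheduler is of course far richer (filters, weights, affinity rules), but the tenant never sees any of it: they ask for a flavor and receive an instance, which is the point of the abstraction.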

OpenStack
OpenStack is also an open-source community, started by NASA and Rackspace and now independent and self-governed. It essentially functions as a collaborative development community aimed at defining and releasing OpenStack software packages. 
After attending presentations and briefings from Deutsche Telekom, Ericsson, Dell, Red Hat, Juniper, Verizon, Intel… I have drawn some very preliminary thoughts I would like to share here:
OpenStack is on its 9th release (Icehouse) and wireless interest is glaringly lacking. It was set up primarily as an enterprise initiative, and while enterprise and telecom IT share many needs, wireless regulations tend to be much more stringent. CALEA (law enforcement) and Sarbanes-Oxley (accounting, traceability) are but a few of the provisions that would preclude OpenStack from running today in a commercial telco private cloud.
As presented by Verizon, Deutsche Telekom and other telcos at the summit, the current state of OpenStack does not allow it to be deployed "out of the box": development and operations teams must patch, adapt and stabilize the system for telco purposes. These patches and tweaks have a negative impact on performance, scalability and latency, because they were not taken into account at the design phase; they are workarounds rather than fixes. Case studies were presented, ranging from CDN video caching in a wireless infrastructure to a generic sandbox for storage and software testing. The results show the technology is not yet mature enough to enable telco-grade services.
Many companies are increasingly investing in OpenStack; still, I feel a separate, telco-focused working group must be created in its midst if it is to reach telco-grade applicability.
More important, and perhaps more concerning, is my belief that the commercial implementation of the technology requires a corresponding change in organizational setup and behaviour. Migrating to the cloud and OpenStack is traditionally associated with the supposed benefits of accelerating service roll-out, reducing time to market, and cutting capex and opex as specialized telco appliances "transcend" to the cloud and are virtualized on off-the-shelf hardware.
There is no free lunch. The technology is currently immature, but as it evolves, we start to see that all these abstraction layers are going to require some very specialized skills to deploy, operate and maintain. These skills are very rare right now: witness HP, Canonical, Intel and Ericsson all advertising "we are hiring" on their booths and during their presentations and keynotes. I have the feeling that operators who want to implement these technologies will simply not have the internal skill set or capacity to roll them out. The large systems integrators might end up being the only winners, ultimately reaping the cost benefits of virtualized networks while selling network-as-a-service to their customers.
Network operators might end up trading one vendor lock-in for another, much stickier one if their services run on a third-party cloud. (I don't believe we can realistically talk about service migration from cloud to cloud and vendor to vendor when two hypervisors supposedly running standard interfaces can't really coexist today in the same service.)

Friday, May 2, 2014

NFV & SDN part I

In their eternal quest to reduce capex, mobile network operators have been egging telecom infrastructure manufacturers on to adopt more open, cost-effective computing capabilities.

You will remember, close to 15 years ago, when all telecom platforms had to be delivered on hardened, NEBS-certified Sun Solaris SPARC servers with a full-fledged Oracle database to be "telecom grade". Little by little, x86 platforms, MySQL databases and Linux have penetrated the ecosystem. It was originally a vendor-driven initiative to reduce third-party costs; the savings were passed on to the MNOs willing to risk implementing these new platforms. We have seen their adoption grow from greenfield operators in emerging countries to mature markets, first at the periphery of the network, slowly making their way to business-critical infrastructure.

We are seeing today an analogous push to reduce costs further and ban proprietary hardware implementations with NFV. Pushed initially by operators, this initiative sees most network functions first transitioning from hardware to software, then being run in virtualized environments on off-the-shelf hardware.

The first companies to embrace NFV have been startups like Affirmed Networks. First met with scepticism, the company seems to have designed from scratch and deployed commercially a virtualized Evolved Packet Core in only four years. It certainly helps that the company was founded to the tune of over 100 million dollars from big names such as T-Ventures and Vodafone, providing not only funding but presumably lab capacity at the parent companies to test and fine-tune the new technology.

Since then, vendors have started embracing the trend and are moving more or less enthusiastically towards virtualization of their offerings. Different approaches are emerging, from the simple porting of software to Xen or VMware virtualized environments to more accomplished OpenStack / OpenFlow platforms.

I am actively investigating the field and I have to say some vendors' strategies are head-scratching. In some cases, moving to a virtualized environment is counterproductive. Some telecom products are highly CPU-intensive or specialized and require dedicated resources to attain high performance and scalability in a cost-effective package; deep packet inspection and video processing seem to be good examples. Even the vendors who have virtualized their appliances or solutions will, when pushed, admit that virtualization comes at a performance cost given the state of the technology today.

I have been reading the specs (OpenFlow, OpenStack) and I have to admit they seem far from the level of detail we usually see in telco specs. There is a lot of abstraction dedicated to redefining switching, but not much in terms of call flows, datagrams, semantics, service definition, etc.

How does one go about launching a service in a multi-vendor environment? Well, one doesn't. There is a reason why most NFV initiatives are still at the plumbing level, investigating SDN, SDDC, etc., or taking a single-vendor / single-service approach. I haven't yet been convinced by anyone's implementation of multi-vendor management, let alone "service orchestration". We are witnessing today islands of service virtualization in hybrid environments; we are still far from function virtualization per se.

The challenges are multiple: 
  • Which is better: a dedicated platform with a low footprint and power requirement that might be expensive and centralized, or thousands of virtual instances occupying hundreds of servers that might be cheap (COTS) individually but collectively not very cost- or power-efficient?
  • Will network operators trade capex for opex when they need to manage thousands of applications running virtually on IT platforms? How will personnel trained to troubleshoot problems by following the traffic and signalling path adapt to this fluid, non-descript environment? 
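The first question lends itself to a back-of-envelope comparison. The sketch below uses purely illustrative capex and power figures, not vendor data, to show how a farm of individually cheap COTS servers can still lose on total cost over a depreciation period:

```python
# Back-of-envelope comparison of one dedicated appliance vs. a farm of
# COTS servers delivering the same throughput. All figures (capex, watts,
# server counts, electricity price) are illustrative assumptions.

def total_cost(unit_capex, unit_power_w, units, years=5, usd_per_kwh=0.10):
    """Capex plus electricity over the period, in dollars."""
    hours = years * 365 * 24
    energy_cost = units * unit_power_w / 1000.0 * hours * usd_per_kwh
    return units * unit_capex + energy_cost

# One dedicated appliance: expensive, but compact and power-efficient.
appliance = total_cost(unit_capex=500_000, unit_power_w=2_000, units=1)

# Equivalent capacity on cheap COTS servers: low unit cost, high count.
cots = total_cost(unit_capex=5_000, unit_power_w=400, units=200)

print(f"appliance: ${appliance:,.0f}")
print(f"COTS farm: ${cots:,.0f}")
```

With these invented numbers, the appliance comes out well ahead; flip the assumptions (fewer servers, cheaper power, appliance software licenses) and the conclusion can reverse, which is precisely why the question has no general answer.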
We are still early in this game, but many vendors are starting to purposefully position themselves in this space to capture the next wave of revenue. 

Will the lack of a programmable multi-vendor control environment force network operators to ultimately be virtualized themselves, relinquishing network management to the large IT and telecom equipment manufacturers? This is one of the questions I will attempt to answer as I investigate the state of the technology in depth and compare it with vendors' and MNOs' claims and assertions.
Stay tuned; more to come later this year with a report on the technology, market trends and vendor capabilities in this space.