You will remember, close to 15 years ago, when all telecom platforms had to be delivered on hardened, NEBS-certified Sun SPARC Solaris servers with a full-fledged Oracle database to be "telecom grade". Little by little, x86 platforms, MySQL databases and the Linux OS have penetrated the ecosystem. It was originally a vendor-driven initiative to reduce their third-party costs. The cost reduction was passed on to MNOs who were willing to take the risk of implementing these new platforms. We have seen their implementation grow from greenfield operators in emerging countries to mature markets, first at the periphery of the network, slowly making their way to business-critical infrastructure.
We are seeing today an analogous push to reduce costs further and ban proprietary hardware implementations with NFV. Pushed initially by operators, this initiative sees most network functions first transitioning from hardware to software, then being run in virtualized environments on off-the-shelf hardware.
The first companies to embrace NFV have been startups like Affirmed Networks. First met with scepticism, the company seems to have been able to design from scratch and commercially deploy a virtualized Evolved Packet Core in only four years. It certainly helps that the company was funded to the tune of over 100 million dollars by big names such as T-Ventures and Vodafone, providing not only funding but presumably the lab capacity at their parent companies to test and fine-tune the new technology.
Since then, vendors have started embracing the trend and are moving more or less enthusiastically towards virtualization of their offerings. We have seen different approaches emerge, from the simple porting of their software to Xen or VMware virtualized environments to more mature OpenStack / OpenFlow platforms.
I am actively investigating the field and I have to say some vendors' strategies are head-scratching. In some cases, moving to a virtualized environment is counter-productive. Some telecom products are highly CPU-intensive or specialized and require dedicated resources to attain high performance and scalability in a cost-effective package; deep packet inspection and video processing seem to be good examples. Even the vendors who have virtualized their appliance / solution will, when pushed, admit that virtualization comes at a performance cost given the state of the technology today.
I have been reading the specs (OpenFlow, OpenStack) and I have to admit they seem far from the level of detail that we usually see in telco specs to be usable. A lot of abstraction dedicated to redefining switching, not much in terms of call flows, datagrams, semantics, service definition, etc.
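To give a sense of that level of abstraction, here is a minimal sketch of what programming an OpenFlow switch actually looks like, using the Ryu controller framework; the IP address and port number are purely illustrative assumptions. The vocabulary is matches, actions and priorities, with no notion of a subscriber, a bearer or a call flow.

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class StaticForwarder(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
        def switch_features_handler(self, ev):
            datapath = ev.msg.datapath
            ofproto = datapath.ofproto
            parser = datapath.ofproto_parser

            # Match IPv4 traffic towards a hypothetical packet-core element
            # and push it out port 2 -- pure switching, nothing service-aware.
            match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.1')
            actions = [parser.OFPActionOutput(2)]
            inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

            datapath.send_msg(parser.OFPFlowMod(datapath=datapath, priority=10,
                                                match=match, instructions=inst))

Useful plumbing, certainly, but a long way from the service definitions a telco spec would normally spell out.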
How the hell does one go about launching a service in a multivendor environment? Well, one doesn't. There is a reason why most NFV initiatives are still at the plumbing level, investigating SDN, SDDC, etc., or taking a single-vendor / single-service approach. I haven't been convinced yet by anyone's implementation of multi-vendor management, let alone "service orchestration". We are witnessing today islands of service virtualization in hybrid environments. We are still far from function virtualization per se.
The challenges are multiple:
- Which is better: a dedicated platform with a low footprint and power requirement that might be expensive and centralized, or thousands of virtual instances occupying hundreds of servers that might be cheap (COTS) individually but collectively not very cost- or power-efficient? (A rough back-of-envelope comparison follows this list.)
- Will network operators trade Capex for Opex when they need to manage thousands of applications running virtually on IT platforms? How will their personnel, trained to troubleshoot problems by following the traffic and signalling path, adapt to this fluid, non-descript environment?
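As a rough illustration of the first question, here is the kind of back-of-envelope arithmetic involved; every figure below is a hypothetical assumption, not a measurement or vendor data.

    # Hypothetical comparison: one dedicated appliance vs. a farm of COTS servers.
    # All figures are illustrative assumptions.

    appliance_power_w = 2500        # assumed draw of one dedicated, centralized platform
    appliance_capacity_gbps = 200   # assumed throughput of that platform

    server_power_w = 400            # assumed draw of one COTS server
    server_capacity_gbps = 10       # assumed per-server throughput once virtualized

    target_gbps = 200
    servers_needed = -(-target_gbps // server_capacity_gbps)   # ceiling division

    print(f"Dedicated: {appliance_power_w} W for {target_gbps} Gbps")
    print(f"Virtualized: {servers_needed} servers drawing {servers_needed * server_power_w} W")

Depending on the assumptions you plug in, the COTS farm can come out several times more power-hungry than the appliance it replaces, which is exactly the trade-off operators will have to weigh.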
We are still early in this game, but many vendors are starting to purposefully position themselves in this space to capture the next wave of revenue.
Stay tuned, more to come with a report on the technology, market trends and vendors' capabilities in this space later this year.