
Thursday, July 31, 2025

The Orchestrator Conundrum strikes again: Open RAN vs AI-RAN

10 years ago (?!) I wrote about the overlaps and potential conflicts between the different orchestration efforts in SDN and NFV. Essentially, I observed that it is desirable to orchestrate network resources with awareness of services, and that service and resource orchestration should have hierarchical, prioritized interactions, so that a service's deployment and lifecycle are managed within resource capacity and, when that capacity fluctuates, priorities can be enforced.
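That hierarchy can be sketched in a few lines of code. The following is an illustrative toy model (the class and method names are my own, not from any MANO or O-RAN specification): a service orchestrator admits services against the capacity reported by a resource orchestrator, and enforces priorities by evicting the lowest-priority services when capacity shrinks.

```python
# Toy sketch of hierarchical service / resource orchestration with
# priority enforcement. Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    demand: int      # abstract capacity units required
    priority: int    # lower number = higher priority

@dataclass
class ResourceOrchestrator:
    capacity: int    # total capacity currently available

@dataclass
class ServiceOrchestrator:
    resources: ResourceOrchestrator
    placed: list = field(default_factory=list)

    def used(self) -> int:
        return sum(s.demand for s in self.placed)

    def deploy(self, svc: Service) -> bool:
        # Admit the service only if it fits within remaining capacity.
        if self.used() + svc.demand <= self.resources.capacity:
            self.placed.append(svc)
            return True
        return False

    def on_capacity_change(self, new_capacity: int) -> list:
        # When capacity fluctuates, enforce priorities: evict the
        # lowest-priority services until the placement fits again.
        self.resources.capacity = new_capacity
        evicted = []
        while self.used() > new_capacity and self.placed:
            victim = max(self.placed, key=lambda s: s.priority)
            self.placed.remove(victim)
            evicted.append(victim.name)
        return evicted
```

For example, an emergency service (priority 0) and a best-effort video service (priority 9) can coexist at full capacity, but when capacity drops, the video service is evicted first while the emergency service survives.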

Service orchestrators have not really been deployed successfully at scale, for a variety of reasons, but primarily because this control point was identified early on as a strategic asset by network operators and traditional network vendors. A few network operators attempted to create an open source orchestration model (Open Source MANO), while traditional telco equipment vendors developed their own versions and refused to integrate their network functions with the competition's. In the end, most of the actual implementation focused on Virtual Infrastructure Management (VIM) and vertical VNF management, while orchestration remained fairly proprietary per vendor. Ultimately, Cloud Native Network Functions appeared and were deployed in Kubernetes, inheriting its native resource management and orchestration capabilities.

In the last couple of years, Open RAN has attempted to collapse RAN Element Management Systems (EMS), Self Organizing Networks (SON) and Operation Support Systems (OSS) with the concept of Service Management and Orchestration (SMO). Its aim is to ostensibly provide a control platform for RAN infrastructure and services in a multivendor environment. The non real time RAN Intelligent Controller (RIC) is one of its main artefacts, allowing the deployment of rApps designed to visualize, troubleshoot, provision, manage, optimize and predict RAN resources, capacity and capabilities.

This time around, the concept of SMO has gained substantial ground, mainly due to the fact that the leading traditional telco equipment manufacturers were not OSS / SON leaders and that Orchestration was an easy target for non RAN vendors wanting to find a greenfield opportunity. 

As we have seen, whether for MANO or SMO, the barriers to adoption weren't really technical but rather economic-commercial as leading vendors were trying to protect their business while growing into adjacent areas.

Recently, AI-RAN has emerged as an interesting initiative, positing that RAN compute will evolve from specialized, proprietary and closed to generic, open and disaggregated. Specifically, RAN compute could evolve from specialized silicon to GPUs. GPUs are able to handle the complex calculations necessary to manage a RAN workload, with spare capacity. Their cost, however, greatly outweighs their utility if used exclusively for RAN. Since GPUs are used in all sorts of high compute environments to facilitate Machine Learning, Artificial Intelligence, Large and Small Language Models, model training and inference, the idea emerged that if the RAN deploys open generic compute, it could be used for RAN workloads (AI for RAN), for workloads that optimize the RAN (AI on RAN), and ultimately for AI/ML workloads completely unrelated to RAN (AI and RAN).
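The economics hinge on how much spare GPU capacity can safely be resold. A hedged sketch of that sharing policy (the function names and the headroom figure are my assumptions, not from the AI-RAN alliance): RAN workloads get strict priority, and AI jobs are admitted, and preempted, only against whatever capacity is left over.

```python
# Illustrative policy for sharing GPU capacity between RAN and AI workloads.
# The 10% headroom for RAN bursts is an arbitrary assumption for the example.
def admissible_ai_capacity(gpu_total: float, ran_load: float,
                           headroom: float = 0.1) -> float:
    """Capacity resellable as GPUaaS/AIaaS after serving RAN plus a safety margin."""
    reserved = ran_load + headroom * gpu_total  # keep headroom for RAN bursts
    return max(0.0, gpu_total - reserved)

def preempt_ai_jobs(ai_jobs: list, spare: float) -> list:
    """Greedily keep AI jobs (name, demand) within spare capacity.

    Returns the names of jobs that must be preempted when RAN load rises.
    """
    kept_load, preempted = 0.0, []
    for name, demand in ai_jobs:
        if kept_load + demand <= spare:
            kept_load += demand
        else:
            preempted.append(name)
    return preempted
```

With 100 units of GPU and a RAN load of 30, this policy leaves 60 units for AI workloads; a training job of 40 fits, but an additional inference job of 30 would be preempted.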

While this could theoretically solve the business case of deploying costly GPUs in hundreds of thousands of cell sites, provided that the idle compute capacity could be resold as GPUaaS or AIaaS, it poses new challenges from a service / infrastructure orchestration standpoint. The AI-RAN alliance is faced with understanding the orchestration challenges between resources and AI workloads.

In an Open RAN environment, the near real time and non real time RICs deploy xApps and rApps. The orchestration of the apps, services and resources is managed by the SMO. While not every app could be categorized as "AI", it is likely that the SMO will take responsibility for AI for RAN and AI on RAN orchestration. If AI and RAN requires its own orchestration beyond Kubernetes, it is unlikely to sit in isolation from the SMO.

From my perspective, I believe that the multiple orchestration, policy management and enforcement points will not allow a multi vendor environment for the control plane. Architecture and interfaces are still in flux, and specialty vendors will have trouble imposing their perspective without control of the end to end architecture. As a result, it is likely that the same vendor will provide the SMO, non real time RIC and AI-RAN orchestration functions (you know my feelings about the near real time RIC).

If you make the Venn diagram of vendors providing / investing in all three, you will have a good idea of the direction the implementation will take.

Monday, October 2, 2023

DOCOMO's 30% TCO Open RAN savings

DOCOMO announced last week, during Mobile World Congress Las Vegas the availability of its OREX offering for network operators. OREX, which stands for Open RAN Experience, was initially introduced by the Japanese operator in 2021 as OREC (Open RAN Ecosystem).

The benefits claimed by DOCOMO are quite extraordinary, as they expect to "reduce clients’ total cost of ownership by up to 30% when the costs of initial setup and ongoing maintenance are taken into account. It can also reduce the time required for network design by up to 50%. Additionally, OREX reduces power consumption at base stations by up to 50%".

The latest announcement clarifies DOCOMO's market proposition and differentiation. Since the initial communications around OREX, DOCOMO had been presenting to the market a showcase of validated Open RAN blueprint deployments that the operator had carried out in its lab. What was unclear was the role DOCOMO wanted to play: was the operator just offering best practice and exemplar implementations, or was it angling for a different play? The latest announcement clarifies DOCOMO's ambitions.

On paper, the operator showed an impressive array of vendors, collaborating to provide multi vendor Open RAN deployments, with choices and some possible permutations between each element of the stack. 


At the server layer, OREX provided options from DELL, HP and Fujitsu, all on x86 platforms, with various acceleration ASICs/FPGAs from Intel (FlexRAN), Qualcomm, AMD and NVIDIA. While the COTS servers are readily interchangeable, the accelerator layer binds the Open RAN software vendor and is not easily swappable.

At the virtualization O-Cloud layer, DOCOMO has integrated VMware, Red Hat, and Wind River, which represent the current best of breed in that space.

The base station software CU / DU has seen implementations from Mavenir, NTT Data, and Fujitsu. 

What is missing in this picture, and a little misleading, is the Open Radio Unit vendors that have participated in these setups, since this is where network operators need the most permutability. As of today, most Open RAN multi vendor deployments will see a separate vendor in the O-RU and CU/DU space. This is because no single vendor today can satisfy the variety of O-RUs necessary to meet all the spectrum / form factor combinations a brownfield operator needs. More details about this in my previous state of Open RAN post here.

In this iteration, DOCOMO has clarified the O-RU vendors it has worked with most recently (Dengyo Technology, DKK Co, Fujitsu, HFR, Mavenir, and Solid). As always, the devil is in the details, and unfortunately DOCOMO falls short of providing a more complete view of the types of O-RU (mMIMO or small cell?) and the combinations of O-RU vendor - CU/DU vendor - Accelerator vendor - band, which is ultimately the true measure of how open this proposition would be.

What DOCOMO clarifies most in this latest iteration, is their contribution and the role they expect to play in the market space. 

First, DOCOMO introduces their Open RAN compliant Service Management and Orchestration (SMO). This offering is a combination of NTT DOCOMO developments and third party contributions (details can be found in my report and workshop Open RAN RICs and Apps 2023). The SMO is DOCOMO's secret sauce when it comes to the claimed savings, resulting mainly from automation of design, deployment and maintenance of the Open RAN systems, as well as RU energy optimization.


Lastly, DOCOMO presents its vast integration experience and is now offering systems integration, support and maintenance services. The operator seeks the role of specialized SI and prime contractor for these O-RAN projects.

While DOCOMO's experience is impressive and has led many generations of network innovation, this latest move, transitioning from leading operator and industry pioneer to O-RAN SI and vendor, is reminiscent of other Japanese companies such as Rakuten with its Symphony offering. Japanese operators and vendors see the contraction of their domestic market as a strategic threat to their core business, and they try to replicate their success overseas. While quite successful in greenfield environments, the hypothesis that brownfield operators (particularly tier 1) will buy technology and services from another carrier (even one not geographically competing) still needs to be validated.

Monday, July 17, 2023

Open RAN technical priorities release 3


The Open RAN technical priorities release 3 was published in March 2023 by Deutsche Telekom, Orange, Telefonica, TIM and Vodafone as part of the Open RAN MoU group at the Telecom Infra Project.

A review of the mandatory, highest priority unanimous requirements sheds light on what the big 5 operators consider essential for vendors to focus on this year and, more importantly, highlights how much effort is still necessary from the industry to meet market expectations.

Scenarios

In this section, the big 5 regard virtualized DU and CU with open Front Haul on site as a must, for macro and indoor / outdoor small cell deployments. This indicates that 7.2.x remains the interface of choice, despite recent attempts by other vendors to change its implementation. It also shows that, as a first step at least, they are looking at deploying Open RAN in the conventional fashion, replacing traditional e/gNode B with like-for-like O-RU, DU and CU on site. The benefits of resource pooling due to disaggregation and virtualization, enabling either the CU or both CU and DU to be centralized, are the highest priority for the majority of operators, but not all yet. Network sharing of O-RU and vDU/CU is also a highest priority for the majority of operators.

Security

The security requirements have increased dramatically in this latest version, with the vast majority of the requirements (166 out of 180) considered highest priority by all the MoU operators. This evolution marks the effort that has been dedicated to the topic over the last 24 months. Open RAN has been openly criticized and accused of lax security, and the O-RAN alliance has dedicated a working group to assess and shore up criticism in that space. My assessment is that most of the security concerns around Open RAN are either linked to the virtualization / O-Cloud implementation or simply a mechanical result of having more open interfaces, providing more attack surfaces. Open RAN is not inherently more or less secure than a 3GPP implementation, and the level of security by design necessary to satisfy the criticisms we have seen in the media is not implemented by traditional RAN vendors today either. Having said that, the requirements now spell out exhaustively the level of admission control, authentication, encryption and certification necessary for each interface, for each infrastructure block and for their implementation in a cloud native containerized environment.

O-Cloud Infrastructure (CaaS)

The O-Cloud requirements are focused on ensuring a cloud-native architecture, while allowing acceleration hardware whenever necessary. As a result, the accent is put on bare metal or IaaS implementations of Kubernetes, with FPGA, eASIC and GPU acceleration support and management. The second theme prevalent in the unanimous high priority O-Cloud requirements is lifecycle management features, which indicate a transition from the lab to more mature commercial implementations going forward.


CU and DU requirements

First and foremost, the big 5 are unanimously looking at virtualized and containerized implementations of O-CU/O-DU with both look-aside and inline acceleration (this is contradictory, but I assume either one is acceptable). The next requirements are the usual availability, scalability and performance related requirements we find in generic legacy RAN systems. Support for all O-RAN interfaces is mandatory.
Interestingly, power consumption targets are now spelled out per scenario.

RU requirements

The Radio Unit requirements are a good illustration of the difficulty of creating a commercially viable Open RAN solution at scale. While all operators claim highest urgent priority for a variety of Radio Units with different form factors (2T2R, 2T4R, 4T4R, 8T8R, 32T32R, 64T64R) in a variety of bands (B1, B3, B7, B8, B20, B28B, B32B/B75B, B40, B78...) and with multi band requirements (B28B+B20+B8, B3+B1, B3+B1+B7), there is no unanimity on ANY of these. This leaves vendors in a quandary, trying to find which configurations could aggregate enough volume to make the investments profitable. There are hidden dependencies that are not spelled out in the requirements, and this is where we see the limits of the TIP exercise. Operators cannot really, at this stage, select 2 or 3 new RU vendors for an Open RAN deployment, which means that, in principle, they need vendors to support most, if not all, of the bands and configurations they need to deploy in their respective networks. Since each network is different, it is extremely difficult for a vendor to define the minimum product line-up necessary to satisfy most of the demand. As a result, the projections for volume are low, which makes vendors focus only on the most popular configurations. While everyone needs 4T4R or 32T32R in the n78 band, having 5 vendors providing options for these configurations, with none delivering B40 or B32/B75, makes it impossible for operators to select a single vendor and for vendors to aggregate sufficient volume to create a profitable business case for Open RAN.
The other RU related requirements helpfully spell out the power consumption, volume and weight targets for each type of configuration.

Open Front Haul requirements

There are no changes in the release 3, which shows the maturity of the interface implementation.

RAN features

The RAN features unanimously required at highest priority by the big 5 operators remain mostly unchanged and emphasize the need for multi connectivity. Dual connectivity between 4G and 5G is essential for any Western European operator to contemplate mass deployment of Open RAN or replacement of their Chinese RAN vendor. The complexity does not stop at supporting the connectivity; it also necessitates advanced features such as Dynamic Spectrum Sharing (DSS) and Carrier Aggregation (CA), which is a complexity multiplier when associated with the RU band support requirements. These advanced features are probably some of the highest barriers to entry for new vendors in the space, as they have been developed for years by traditional vendors and require a high level of technological maturity and industrialization.

Near-RT RIC

The requirements for the Near-Real Time RAN Intelligent Controller are extremely ambitious. While they technically would enable better control of a multi-vendor RAN operation, they are unlikely to succeed in the short to medium term, in my opinion, as per previous analysis.

SMO and Non-RT RIC

The requirements for Service Management and Orchestration and Non-Real Time RIC are fairly mature and provide a useful framework for RAN domain automation and lifecycle management. The accent in this release is put on AI/ML support and management, which shows that the operators have been seduced by the promise of the technology: a zero touch, automated network relying on historical analysis and predictive algorithms. The requirements are fairly high level, suggesting that the operators themselves might not yet have very clear targets in terms of algorithmic policies, performance and management.

In conclusion, this document provides useful data on Open RAN maturity and priorities. While release 3 shows great progress in many aspects, it still fails to provide sufficient unanimous guidance, from a commercial standpoint, on the minimum set of end to end capabilities a vendor could reasonably develop to be selected for deployment at scale in these Western European networks.

Tuesday, March 21, 2017

What is left for operators to enable SDN and NFV?






In a live debate held last week at Mobile World Congress, Patrick Lopez, VP Networks Innovation, Telefonica, and Manish Singh, VP Product Management, SDN & NFV, Tech Mahindra, joined TMN editor Keith Dyer to discuss what operators are hoping to achieve with the adoption of NFV and SDN.
The panel asked what the end goals are, and looked at the progress operators have made so far, picking out key challenges that operators still face around integration, certification and onboarding of VNFs, interoperability, the role of orchestration and the different Open Source approaches to NFV MANO.
The panel also looked at how operators can adapt their own cultures to act in a more agile way, adopting continuous integration and DevOps models.
Key quotes:
Lopez: “The end game is the ability to create services that are more customer-centric and enable operators to provide real value to consumers, things and enterprises by providing experiences that are tailored for them. And to be able to do that you need to have an infrastructure that is very elastic and very agile – that’s where SDN and NFV comes in.”
Singh: “As we dis-aggregate the hardware from the software, and get to this virtualised infrastructure layer where different network functions are orchestrated – integration, performance characterisation, capacity planning and onboarding all become challenges that need to be addressed.”
Singh: “There has been ecosystem fragmentation in the orchestration layer and for the VNF vendors that was creating challenges in terms of, ‘How many orchestrators, how many VIMs on the infrastructure layer do I support?'”
Lopez: “It’s really hard to create an industry that is going to grow if we don’t all share the same DNA.”
Singh: “The good news is there is a vibrant ecosystem, and I think having a couple of key alternatives as we drive forward is a good thing. And we see an inflection point where a new way of standardising things is coming up, and that really sets the way for 5G.”
Lopez: “You cannot implement automation well if you don’t understand how you have deployed that NFV-SDN technology. You need to implement that itself to understand the gotchas to be able to automate.”
Singh: “As we look at SDN NFV the other key aspect is the ability to bring new players, VNFs and components into the fold and we are enabling that to be done cost effectively, efficiently and rapidly.”
Lopez: “It [SDN-NFV] works, we can achieve the main core benefits of the technology. It can do what we were planning to do – to run a software defined network. We are there, now it is about optimising it and making it run better and automating it.”

Wednesday, January 25, 2017

World's first ETSI NFV Plugfest

As everyone in the telecom industry knows, the transition from standard to implementation can be painful, as vendors and operators translate technical requirements and specifications into code. There is always room for interpretation, and desires to innovate or differentiate, that can lead to integration issues. Open source initiatives have been able to provide viable source code for implementations of elements and interfaces, and they are a great starting point. The specific vendors' and operators' implementations still need to be validated, and it is necessary to test that integration needs are minimal.

Network Functions Virtualization (NFV) is an ETSI standard that is a crucial element of telecom network evolution, as operators look at the transformation necessary to accommodate the hyper growth resulting from video services moving online and to mobile.

As a member of the organization’s steering committee, I am happy to announce that the 5G open lab 5Tonic will be hosting the world’s first ETSI NFV plugfest from January 23 to February 3, 2017 with the technical support of Telefonica and IMDEA Networks Institute.  

5Tonic is opening its doors to the NFV community, comprising network operators, vendors and open source collaboration initiatives, to assert and compare their implementations of Virtual Network Functions (VNFs), NFV Infrastructure and Virtual Infrastructure Managers. Additionally, implementations of Management and Orchestration (MANO) functions will also be available.

43 companies and organizations have registered, making this event the largest NFV interoperability event in the world.

Companies:
•           Telefonica
•           A10
•           Cisco
•           Canonical
•           EANTC
•           EHU
•           Ensemble
•           Ericsson
•           F5
•           Fortinet
•           Fraunhofer
•           HPE
•           Huawei
•           Anritsu
•           Intel
•           Italtel
•           Ixia
•           Keynetic
•           Lenovo
•           Mahindra
•           Openet
•           Palo Alto
•           Radware
•           RIFT.io
•           Sandvine
•           Sonus
•           Spirent
•           RedHat
•           VMWare
•           WIND

Open source projects:
•           OSM (Open Source MANO)
•           Open Baton
•           Open-O
•           OPNFV

 OSM is delivering an open source MANO stack aligned with ETSI NFV Information Models. As an operator-led community, OSM is offering a production-quality open source MANO stack that meets the requirements of commercial NFV networks.

Testing will take place on site at the 5TONIC lab near Madrid, as well as virtually for remote participants.


Tuesday, June 21, 2016

SDN / NFV: Enemy of the state

Extracted from my SDN and NFV in wireless workshop.

I want to talk today about an interesting subject I have seen popping up over the last six months or so and in many presentations in the stream I chaired at the NFV world congress a couple of months ago.

In NFV, and to a certain extent in SDN as well, service availability is achieved through a combination of function redundancy and fast failover routing whenever a failure is detected in the physical or virtual fabric. Availability is a generic term, though, and covers different expectations depending on whether you are a consumer, operator or enterprise. The telecom industry has heralded the mythical 99.999% or five nines availability as the target for telecoms equipment vendors.

This goal has led to networks and appliances that are super redundant, at the silicon, server, rack and geographical levels, with complex routing, load balancing and clustering capabilities to guarantee that element failures do not catastrophically impact services. In today's cloud networks, one arrives at the conclusion that a single cloud, even tweaked, can't perform beyond three nines availability, and that you need a multi-cloud strategy to attain five nines of service availability...
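The arithmetic behind that statement is simple: if failures across clouds are independent (a big assumption that real deployments only approximate), unavailabilities multiply.

```python
# Combined availability of n independent failure domains: the service is
# down only when all replicas are down, so unavailability multiplies.
def combined_availability(per_cloud: float, n: int) -> float:
    return 1 - (1 - per_cloud) ** n

# A single cloud at three nines stays at three nines...
assert abs(combined_availability(0.999, 1) - 0.999) < 1e-12
# ...but two independent three-nines clouds already exceed five nines
# (assuming independent failures, which is the optimistic case).
assert combined_availability(0.999, 2) > 0.99999
```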

Consumers, over the last ten years, have proven increasingly ready to accept a service that might not always be of the best quality if the price point is low enough. We all remember the early days of Skype, when we would complain of failed and dropped calls or voice distortions, but we all put up with it, mostly because it was free-ish. As the service quality improved, new features and subscription schemes were added, allowing for new revenues as consumers adopted new services.
One could think from that example that maybe it is time to relax the five nines edict for telecoms networks, but there are two data points that run counter to that assumption.


  1. The first and most prominent reason to keep a high level of availability is actually a regulatory mandate. Network operators operate not only a commercial network but also a series of critical infrastructure for emergency and government services. It is easy to think that 95 or 99% availability is sufficient until you have to deliver 911 calls, where that percentage difference means loss of life.
  2. The second reason is more innate to network operators themselves. Year after year, polls show that network operators believe that the way they will outcompete each other and OTTs in the future is quality of service, where service availability is one of the first table stakes.


As I am writing this blog, SDN and NFV in wireless have struggled through demonstrating basic load balancing and static traffic routing, to functions virtualization and auto scaling over the last years. What is left to get commercial grade (and telco grade) offerings is resolving the orchestration bit (I'll write another post on the battles in this segment) and creating a service that is both scalable and portable.

The portable bit is important, as a large part of the value proposition is to be able to place functions and services closer to the user or the edge of the network. To do that, an orchestration system has to be able to detect what needs to be consumed where and to place and chain relevant functions there.
Many vendors can demonstrate that part. The difficulty arises when it becomes necessary to scale in or down a function or when there is a failure.

Physical and virtual function failures are to be expected. When they arise in today's systems, there is a loss of service, at least for the users that were using these functions. In some cases, the loss is transient and a new request / call will be routed to another element the second time around; in other cases, it is permanent and the session / service cannot continue until another one is started.

In the case of scaling in or down, most vendors today will starve the virtual function and route all new requests to other VMs until this function can be shut down without impact to live traffic. It is not the fastest or the most efficient way to manage traffic. You essentially lose all the elasticity benefits on the scale down if you have to manage these moribund zombie-VNFs until they are ready to die.
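That draining behaviour can be sketched in a few lines (an illustrative toy, not any vendor's actual VNFM API): a draining instance stops receiving new sessions and is reaped only once its last session completes, which is exactly why the "zombie" lingers.

```python
# Toy model of drain-based scale-in: starve the instance of new sessions,
# then terminate it once existing sessions have completed.
class VnfInstance:
    def __init__(self, name: str):
        self.name, self.sessions, self.draining = name, 0, False

class Pool:
    def __init__(self, instances):
        self.instances = instances

    def route_new_session(self) -> str:
        # New requests go only to non-draining instances (least loaded first).
        candidates = [i for i in self.instances if not i.draining]
        target = min(candidates, key=lambda i: i.sessions)
        target.sessions += 1
        return target.name

    def scale_in(self, name: str):
        inst = next(i for i in self.instances if i.name == name)
        inst.draining = True  # the zombie VNF lingers until sessions drain

    def reap(self) -> list:
        # Terminate drained instances; return the names released.
        dead = [i for i in self.instances if i.draining and i.sessions == 0]
        self.instances = [i for i in self.instances if i not in dead]
        return [i.name for i in dead]
```

The capacity held by the draining instance is not released until `reap` succeeds, which illustrates the loss of elasticity on scale-down described above.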

Vendors and operators who have been looking at these issues have come to a conclusion: beyond the separation of control and data plane, it is necessary to further separate the state of each machine, function and service, and to centralize it, in order to achieve consistent availability, true elasticity and manageable disaster recovery scenarios.

In most cases, this is a complete redesign for vendors. Many of them have already struggled to port their product to software, then to hypervisors, then to optimize it for performance... Separating state from the execution environment is not going to be just another port. It is going to require redesign and re-architecting.

The cloud-native vendors who have designed their platforms with microservices and modularity in mind have a better chance, but there is still a series of challenges to be addressed. Namely, collecting state information from every call in every function, centralizing it and then redistributing it is going to create a lot of signalling traffic. Some vendors are advocating inline signalling capabilities to convey the state information in a tokenized fashion; others are looking at more sophisticated approaches, including state controllers that will collect, transfer and synchronize relevant state across clouds.
In any case, it looks like there is still quite a lot of work to be done in creating truly elastic and highly available virtualized, software defined networks.
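To make the state-separation idea concrete, here is a minimal sketch (the function name and the dict-backed store are my own illustration; a real deployment would use a replicated state service): session state lives outside the worker, keyed by session id, so any instance, including a replacement spun up after a failure, can continue a session another worker started.

```python
# Externalized session state: the worker keeps nothing between calls.
# A plain dict stands in for a replicated, centralized state store.
state_store: dict = {}

def handle_event(session_id: str, event: str) -> int:
    # Load state, mutate it, write it back: any worker can run this.
    state = state_store.setdefault(session_id, {"log": []})
    state["log"].append(event)  # record the event in the externalized state
    return len(state["log"])    # number of events seen for this session
```

Because the state travels through the store rather than living in the process, a second instance calling `handle_event` for the same session picks up exactly where the first left off.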

Tuesday, March 1, 2016

Mobile World Congress 16 hype curve

Mobile World Congress 2016 was an interesting show in many aspects. Here are some of my views on most and least hyped subjects, including mobile video, NFV, SDN, IoT, M2M, augmented and virtual reality, TCP optimization, VoLTE and others

First, let's start with mobile video, my pet subject, as some of you might know. 2016 sees half of Facebook's users being exclusively mobile, generating over 3/4 of the company's revenue, while half of YouTube views are on mobile devices and nearly half of Netflix members under 34 watch from a mobile device. There is mobile and mobile, though, and a good 2/3 of these views occur on wifi. Still, internet video service providers see themselves becoming mobile companies faster than they thought. The result is increased pressure on mobile networks to provide fast, reliable video services, as 2K, 4K, 360 degree video, augmented and virtual reality are next on the list of services to appear. This continues to create distortions in the value chain, as encryption, ad blocking, privacy, security, net neutrality, traffic pacing and prioritization are being used as weapons of slow attrition by traditional and new content and service providers. On the network operators' side, many have deserted the video monetization battlefield. T-Mobile's Binge On seems to give MNOs pause for reflection on alternative models for video services cooperation. TCP optimization has been running hot as a technology for the last 18 months and has seen Teclo Networks acquired by Sandvine on the heels of this year's congress.

Certainly, I have felt a change of pace and tone in many announcements, with NFV hyperbolic claims subsiding somewhat compared to last year. Specifically, we have seen several vendors' live deployments, mostly revolving around launches of VoLTE, virtualized EPC for MVNOs, enterprises or verticals, and ubiquitous virtualized CPE, but still little in terms of multi-vendor, generic-traffic NFV deployments at scale. Talking about VoLTE, I now have anecdotal evidence from Europe, Asia and North America that the services commercially launched are well below expectations in terms of quality and performance against circuit switched voice.
The lack of maturity of standards for Orchestration is certainly the chief culprit here, hindering progress for open multi vendor service automation. 
Proof can be found in the flurry of vendor "ecosystems". If everyone works so hard to be in one, and each vendor has their own, it underlines the market fragmentation rather than reducing it.
An interesting announcement showed Telefonica, BT, Korea Telecom, Telekom Austria, SK, Sprint, and several vendors taking a page from OPNFV's playbook and creating probably the first open-source project within ETSI, aimed at delivering a collaborative MANO project.
I have been advocating for such a project for more than 18 months, so I certainly welcome the initiative, even if ETSI might not feel like the most natural place for an open source project. 

Overall, NFV feels more mature, but still very much disconnected from reality: a solution looking for problems to solve, with little in terms of new service creation. If all the hoopla leads to cloud-based VPNs, VoLTE and cheaper packet core infrastructure, the business case remains fragile.

The SDN announcements were somewhat muted, but showed good progress in SD-WAN and SD data center architectures, with the recognition, at last, that specialized switches will likely still be necessary in the short to medium term if we want high performance software defined fabrics, even if it impacts agility. The compromises are a sign of a market maturing, not of a failure to deliver on the vendors' part, in my opinion.

IoT and M2M were still ubiquitous and vague, depicted alternately as the next big thing or already here. The market fragmentation in terms of standards, technology, use cases and understanding leads to baseless, fantasist claims from many vendors (and operators) on the future of wearables, autonomous transport and connected objects... with little evidence of a coherent ecosystem forming. It is likely that a dominant player will emerge and provide a top-down approach, but the business case seems to hinge on killer apps that hint at next generation networks yet to be fulfilled.

5G was on many vendors' lips as well, even if it seems to consistently mean different things to different people, including MIMO, beamforming, virtualized RAN... What was clear, from my perspective, was that operators were ready at last to address latency (as opposed to, or in complement of, bandwidth) as a key resource and attribute to discriminate services and their associated network slices.

Big Data slid right down the hype curve this year, with very little in terms of announcements or even references in vendors' product launches or deployments. It now seems granted that any piece of network equipment, physical or virtual, must generate rivulets that stream into rivers and data lakes, to be avidly aggregated and correlated by machine learning algorithms into actionable insights in the form of analytics and alerts. Vendors show progress in reporting, but true multi-vendor, holistic analytics remains extremely difficult, due to the fragmentation of vendors' data attributes and the need for data scientists and subject matter experts to work together to discriminate actionable insights from false positives.
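To make the attribute-fragmentation problem concrete, here is a minimal sketch of the kind of normalization layer multi-vendor analytics requires before any correlation can happen. The vendor names, field names and units below are invented for illustration, not from any real equipment vendor's schema.

```python
# Each vendor exports the same metric under different attribute names and
# units, so records must be mapped to a common schema before aggregation.
# VENDOR_SCHEMAS maps common field -> vendor-specific raw field.
VENDOR_SCHEMAS = {
    "vendorA": {"cell_id": "cellId", "throughput_mbps": "dlThpt"},
    "vendorB": {"cell_id": "CELL", "throughput_mbps": "dl_kbps"},
}

def normalize(vendor: str, record: dict) -> dict:
    """Translate a vendor-specific record into the common schema."""
    mapping = VENDOR_SCHEMAS[vendor]
    out = {common: record[raw] for common, raw in mapping.items()}
    if vendor == "vendorB":  # unit fix-up: this vendor reports kbps
        out["throughput_mbps"] = out["throughput_mbps"] / 1000
    return out

records = [
    normalize("vendorA", {"cellId": "A-17", "dlThpt": 42.0}),
    normalize("vendorB", {"CELL": "B-03", "dl_kbps": 38000}),
]
# Only after normalization can metrics be correlated across vendors.
avg = sum(r["throughput_mbps"] for r in records) / len(records)
print(avg)  # 40.0
```

In a real network this mapping layer has to cover thousands of counters per vendor, which is exactly why the subject matter experts are indispensable.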

On the services side, augmented and virtual reality were revving up to the next hype phase, with a multitude of attendees walking blindly with goggles and smartphones stuck to their faces... not the smartest look, and unlikely to pass the novelty stage until integrated into less obtrusive displays. On the AR front, convincing use cases are starting to emerge, such as furniture shopping (where you can see and position furniture in your home by superimposing it from a catalogue app), that are pragmatic and useful without being too cumbersome. Anyone who has had to send furniture back because it did not fit or the color wasn't quite the same as the room will understand.
Ad blocking certainly became a subject of increased interest, as operators and service providers are still struggling for dominance. As encrypted data traffic increases, operators are starting to explore ways to provide services that users see as valuable, and if these hurt some of the OTTs' business models, that is certainly an additional bargaining chip. The melding and reforming of the mobile value chain continues and accelerates, with increased competition, collaboration and coopetition as MNOs and OTTs find a settling position. I have recently ranted about what's wrong with the mobile value chain, so I will spare you here.

Finally, my personal project of interest this year revolves around Mobile Edge Computing. I have started production of a report on the subject. I think the technology has the potential to unlock many new services in mobile networks, and I can't wait to tell you more about it. Stay tuned for more!

Thursday, January 22, 2015

The future is cloudy: NFV 2020

As the first phase of ETSI ISG NFV wraps up and phase 1's documents are being released, it is a good time to take stock of the progress to date and what lies ahead.

ETSI members have set an ambitious agenda to create a function and service virtualization strategy for broadband networks, aiming at reducing hardware and vendor dependency while creating an organic, automated, programmable network.

The first set of documents approved and published represents great progress, and possibly one of the fastest rollouts of a new standard: only two years. It also highlights how much work is still necessary to make the vision a reality.

Vendor announcements are everywhere: "NFV is a reality, it is happening, it works, you can deploy it in your networks today..." I have no doubt Mobile World Congress will see several "world's first commercial deployment of [insert your vLegacyProduct here]"... The reality is a little more nuanced.

Network Functions Virtualization, as a standard, does not today allow commercial deployment out of the box. There are too many ill-defined interfaces, competing protocols and missing APIs for it to be plug and play. The only viable deployment scenario today is a single-vendor, or tightly integrated (proprietary) dual-vendor, strategy for siloed services and functions. From the relatively simple (Customer Premises Equipment) to the very complex (Evolved Packet Core), it will be possible to see commercial deployments in 2015, but they will not be able to illustrate all the benefits of NFV.

As I mentioned before, orchestration, integration with SDN, performance, security, testing, governance... are some of the technological challenges that remain for viable commercial deployment of NFV in wireless networks. Beyond technology, the operational task of evolving and training operators' workforces is probably the largest challenge of all.

From my many interactions and interviews with network operators, it is clear that there are several different strategies at play.

  1. The first strategy is to roll out a virtualized function / service with one vendor, after having tested, integrated and trialed it. It is a strategy we are seeing a lot in Japan or Korea, for instance. It provides a pragmatic learning process towards implementing virtualized functions in commercial networks, recognizing that standards and vendor implementations will not be fully interoperable for a few years.
  2. The second strategy is to stimulate the industry through standards and forum participation, proofs of concept, and even homegrown development. This strategy is more time- and resource-intensive but leads to the creation of an ecosystem. No big bang, but an evolutionary, organic roadmap that picks and chooses which vendors, network elements and services are ready for trial, PoC, limited or commercial deployment. The likes of Telefonica and Deutsche Telekom are good examples of this approach.
  3. The third strategy is to define very specifically the functions to be virtualized, along with their deployment, management and maintenance model, and to select a few vendors to enact this vision. AT&T is a good illustration here. The advantage is a tailored experience that meets specific needs in a timely fashion, ahead of standards completion; the drawback is reduced flexibility, as vendors are not interchangeable and integration is somewhat proprietary.
  4. The last strategy is not really a strategy; it is more a wait-and-see approach. Many operators do not have the resources or the budget to lead or manage this complex network and business transformation. They are observing progress and placing bets on what can be deployed when.
As it stands, I will continue monitoring and chairing many of the SDN / NFV shows this year. My report on SDN / NFV in wireless networks is changing as fast as the industry, so look out for updates throughout 2015.

Monday, October 20, 2014

Report from SDN / NFV shows part I

Wow! Last week was a busy week for everything SDN / NFV, particularly in wireless. My in-depth analysis of the segment is captured in my report. Here are a few thoughts on the latest news.

First, as is now almost traditional, a third white paper on Network Functions Virtualization was released by network operators. Notably, the original group of 13 who co-wrote the first manifesto that spurred the creation of ETSI ISG NFV has now grown to 30. The Industry Specification Group now counts 235 member companies (including yours truly) and has seen 25 Proofs of Concept initiated. In short, the white paper announces another two-year term of effort beyond the initial timeframe. This new phase will focus on multi-vendor orchestration interoperability and integration with legacy OSS/BSS functions.

MANO (orchestration) remains a point of contention, and many are starting to recognize the growing threat and opportunity the function represents. Some operators (like Telefonica) seem to have actually reached the same conclusions as I have in this blog and are starting to look deeply into what implementing MANO means for the ecosystem.

Today I will go a step further. I believe that MANO in NFV has the potential to evolve the same way app stores did in wireless. It is probably an apt comparison: both are used to safekeep, reference, inventory and manage the propagation and lifecycle of software instances.

In both cases, the referencing of the apps/VNFs is a manual process, with arbitrary rules that can lead to a dominant position if not caught early. It would be relatively easy, in this nascent market, for an orchestrator to integrate as many VNFs as possible, with some "extensions" to lock in the segment, as Apple and Google did in mobile.

I know, "Open" is the new "Organic", but for me there is a clear need to create an open source MANO project; let's call it "OpenHand"?

You can view below a mash-up of the presentations I gave at the show last week and at SDN & NFV USA in Dallas the week before.



More notes on these past few weeks soon. Stay tuned.

Tuesday, September 30, 2014

NFV & SDN 2014: Executive Summary


This Post is extracted from my report published October 1, 2014. 

Cloud and Software Defined Networking have been technologies explored successively in academia, IT and enterprise since 2011 and the creation of the Open Networking Foundation. 
They were mostly subjects of interest relegated to science projects in wireless networks until, in the fall of 2012, a collective of 13 mobile network operators co-authored a white paper on Network Functions Virtualization. This white paper became a manifesto and catalyst for the wireless community and was seminal to the creation of the eponymous ETSI Industry Specification Group.
A year later, AT&T announced the creation of a new network architectural vision – Domain 2.0, relying heavily on SDN and NFV as building blocks for its next generation mobile network.

Today, SDN and NFV are hot topics in the industry and many companies have started to position themselves with announcements, trials, products and solutions.

This report is the result of hundreds of interviews, briefings and meetings with operators and vendors active in this field. In the process, I have attended, participated in and chaired various events such as OpenStack, ETSI NFV ISG and the SDN & OpenFlow World Congress, and became a member of ETSI, OpenStack and the TM Forum.
The Open Networking Foundation, the Linux Foundation, OpenStack, the OpenDaylight project, the IEEE, ETSI and the TM Forum are just a few of the organizations involved in the definition, standardization or facilitation of cloud, SDN and NFV. This report provides a view of each organization's contribution and progress to date.

Unfortunately, there is no such thing as SDN-NFV today. These are technologies that have overlaps and similarities but stand widely apart. Software Defined Networking is about managing network resources. It is an abstraction that allows the definition and management of IP networks in a new fashion. It separates the control plane from the data plane and allows network resources to be orchestrated and used across applications independently of their physical location. SDN exhibits a level of maturity through the variety of contributions to its leading open-source community, OpenStack. In its ninth release, the architectural framework is well suited to abstracting cloud resources, but is dominated by enterprise and general IT interests, with little in terms of focus on, or applicability to, wireless networks.
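The control/data plane separation at the heart of SDN can be illustrated with a toy sketch: a centralized controller programs match/action rules into the flow tables of distributed switches, which then forward packets on their own. This is a deliberately simplified model, not a real controller API such as OpenFlow's.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict       # e.g. {"dst_ip": "10.0.0.2"}
    action: str       # e.g. "forward:port2" or "drop"
    priority: int = 0

@dataclass
class Switch:
    """Data plane: forwards packets using only its local flow table."""
    name: str
    table: list = field(default_factory=list)

    def handle(self, packet: dict) -> str:
        # Highest-priority matching rule wins; a table miss is punted
        # back to the controller, as in OpenFlow-style designs.
        for rule in sorted(self.table, key=lambda r: -r.priority):
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "punt-to-controller"

class Controller:
    """Control plane: holds global policy and pushes rules to switches."""
    def __init__(self, switches):
        self.switches = switches

    def install(self, rule: FlowRule):
        for sw in self.switches:
            sw.table.append(rule)

sw = Switch("edge1")
ctl = Controller([sw])
ctl.install(FlowRule({"dst_ip": "10.0.0.2"}, "forward:port2", priority=10))
print(sw.handle({"dst_ip": "10.0.0.2"}))  # forward:port2
print(sw.handle({"dst_ip": "10.0.0.9"}))  # punt-to-controller
```

The point of the abstraction is that the controller can reprogram every switch's behavior without touching the switches' software, regardless of where they physically sit.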

Network Functions Virtualization is about managing services. It allows software elements to be broken down and instantiated as virtualized entities that can be invoked, assembled, linked and managed to create dynamic services. NFV, by contrast, through its ETSI specification group, is focused exclusively on wireless networks but, in the process of releasing its first standard, is still very incomplete in its architecture, interfaces and implementation.

SDN may or may not comprise NFV elements, and NFV may or may not be governed or architected using SDN. Many of the Proofs of Concept (PoCs) examined in this document attempt to map SDN architecture onto NFV functions in the hope of bridging the gap. The two frameworks can be complementary, but both suffer from growing pains and a diverging set of objectives.


The intent of this report is to paint a picture of the state of SDN and NFV implementations in mobile networks. It describes what has been trialed, deployed in labs or deployed commercially, which elements are likely to be virtualized first, the timeframes, the strategies and the main players.

Tuesday, September 9, 2014

SDN & NFV part VI: Operators, dirty your MANO!

While NFV in ETSI was initially started by network operators with their founding manifesto, in many instances we see that although there is a strong desire to force the commoditization of telecoms appliances, there is little appetite among operators to perform the sophisticated integration necessary for these new systems to work.

This is reflected, for instance, in MANO, where operators seem to have put the onus back on vendors to lead the effort.

Some operators (Telefonica, AT&T, NTT…) seem to invest resources not only in monitoring the process but also in actual development of the technology. By and large, however, according to my study, MNOs seem to have taken a back seat in NFV implementation efforts. Many vendors note that MNOs tend to have a very hands-off approach towards the PoCs they "participate" in, offering guidance, requirements or, in some cases, just lending their name to the effort without "getting their hands dirty".

The Orchestrator's task in NFV is to integrate with the OSS/BSS and to manage the lifecycle of VNFs and NFVI elements.

It onboards new network services and VNFs, and it performs service chaining: deciding through which VNFs, and in what order, traffic must flow, according to routing rules and templates.

These routing rules are called forwarding graphs. Additionally, the Orchestrator performs policy management between VNFs. Since all VNFs are proprietary, integrating them within a framework that allows their components to interact is a huge undertaking. MANO is probably the least mature part of the specification today and the one requiring the most work.
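A forwarding graph can be sketched, in its simplest linear form, as an ordered chain of VNFs that traffic traverses. The VNF names and behaviors below (a firewall, DPI, NAT) are illustrative assumptions, not from any vendor catalogue or the ETSI specification.

```python
from typing import Callable

# Each "VNF" is modeled as a function that transforms a packet (a dict).
def v_firewall(pkt: dict) -> dict:
    if pkt.get("port") == 23:          # drop telnet, for example
        pkt["dropped"] = True
    return pkt

def v_dpi(pkt: dict) -> dict:
    pkt["inspected"] = True            # deep packet inspection marker
    return pkt

def v_nat(pkt: dict) -> dict:
    pkt["src_ip"] = "203.0.113.1"      # rewrite to a public address
    return pkt

# The forwarding graph is simply the ordered chain the Orchestrator
# enforces: firewall first, then DPI, then NAT.
forwarding_graph: list[Callable] = [v_firewall, v_dpi, v_nat]

def chain(pkt: dict, graph=forwarding_graph) -> dict:
    for vnf in graph:
        pkt = vnf(pkt)
        if pkt.get("dropped"):
            break                      # short-circuit: no point going on
    return pkt

print(chain({"src_ip": "10.0.0.5", "port": 80}))
```

Real forwarding graphs are directed graphs rather than simple lists (traffic can branch on classification), but the orchestration problem is the same: deciding the order and enforcing it across proprietary VNFs.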


Since it is the brain of the framework, failure of MANO to reach a level of maturity enabling consensus among the participants of the ISG will inevitably relegate NFV to vertical implementations. This could lead to a network that is a collection of vertically virtualized elements, each with its own MANO or very high-level API abstractions, considerably reducing overall system elasticity and programmability. SDN OpenStack-based models can be used for MANO's orchestration of resources (the Virtualized Infrastructure Manager) but offer little in the pure orchestration and VNF management field beyond the simplest IP routing tasks.


Operators who are serious about NFV in wireless networks should seriously consider developing their own orchestrator or, at a minimum, implementing strict orchestration guidelines. They could force vendors to adopt a minimum set of VNF abstraction templates for service chaining and policy management.
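What could such a mandated VNF abstraction template look like? Here is a hedged sketch of a minimal descriptor an operator might require every vendor to supply before onboarding, with a trivial validator. The field names are hypothetical assumptions, not drawn from the ETSI NFV specification.

```python
# Hypothetical minimum descriptor fields an operator could mandate for
# service chaining (ingress/egress) and policy management (policies).
REQUIRED_FIELDS = {"name", "version", "ingress", "egress", "policies", "resources"}

def validate_vnf_descriptor(desc: dict) -> list[str]:
    """Return a list of problems; an empty list means the descriptor passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - desc.keys())]
    if "resources" in desc and "vcpus" not in desc["resources"]:
        problems.append("resources must declare vcpus")
    return problems

# Example descriptor a vendor might submit for onboarding (illustrative).
vfw = {
    "name": "vFirewall",
    "version": "1.0",
    "ingress": "eth0",
    "egress": "eth1",
    "policies": ["block-telnet"],
    "resources": {"vcpus": 2, "memory_mb": 4096},
}
print(validate_vnf_descriptor(vfw))  # []
```

Even a template this thin would let an orchestrator wire any compliant VNF into a forwarding graph without vendor-specific integration work, which is precisely the leverage operators give up when they leave the definition to each vendor.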