Showing posts with label service enablement.

Wednesday, April 16, 2025

Is AI-RAN the future of telco?

AI-RAN has emerged recently as an interesting evolution of telecom networks. The Radio Access Network (RAN) has been undergoing a transformation over the last 10 years, from a vertical, proprietary, highly concentrated market segment to a disaggregated, virtualized, cloud-native ecosystem.

A product of the maturation of a number of technologies, including telco cloudification, RAN virtualization, Open RAN and, lately, AI/ML, AI-RAN has been positioned as a means to further disaggregate and open up the RAN infrastructure.

This latest development has to be examined from an economic standpoint. The RAN accounts for roughly 80% of a telco's deployment costs (excluding licenses, real estate...), and roughly 80% of those costs are attributable to the radios themselves and their electronics. The market is dominated by a few vendors, leaving telecom operators exposed to substantial supply chain risks and reduced purchasing power.

The AI-RAN Alliance was created in 2024 to accelerate its adoption. It is led by network operators (T-Mobile, Softbank, Boost Mobile, KT, LG Uplus, SK Telecom...) and telecom and IT vendors (Nvidia, Arm, Nokia, Ericsson, Samsung, Microsoft, Amdocs, Mavenir, Pure Storage, Fujitsu, Dell, HPE, Kyocera, NEC, Qualcomm, Red Hat, Supermicro, Toyota...).

If you are familiar with this blog, you already know about the evolution from RAN to cloud RAN and Open RAN, and more recently the forays into RAN intelligence with the early implementations of the near- and non-real-time RAN Intelligent Controllers (RIC).

AI-RAN goes one step further, proposing that the specialized electronics and software traditionally embedded in RAN radios be deployed on high-compute, GPU-based commercial off-the-shelf servers. These GPUs handle the complex RAN computations (beamforming management, spectrum and power optimization, waveform management...) and double as a general high-compute environment for AI/ML applications that would benefit from deployment in the RAN (video surveillance; scene, object and biometrics recognition; augmented / virtual reality; real-time digital twins...). It is very similar to the early edge computing market.
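To make the co-location idea more concrete, here is a minimal, illustrative sketch in Python of how a shared GPU node might admit real-time RAN workloads against a reserved partition and offer the remaining capacity to best-effort AI jobs. The class names, the 60% reservation and the workload figures are hypothetical, not drawn from any AI-RAN specification.

```python
# Illustrative sketch only: a toy admission check for sharing GPU capacity
# between RAN processing and general AI workloads on the same server.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    gpu_share: float       # fraction of GPU compute requested (0.0 - 1.0)
    realtime: bool         # RAN baseband tasks need deterministic scheduling

class GpuNode:
    def __init__(self, ran_reserved: float = 0.6):
        # Reserve headroom for RAN baseband tasks; the rest is sellable AI capacity.
        self.ran_reserved = ran_reserved
        self.allocated_ran = 0.0
        self.allocated_ai = 0.0

    def admit(self, w: Workload) -> bool:
        if w.realtime:
            # RAN workloads draw from the reserved partition.
            if self.allocated_ran + w.gpu_share <= self.ran_reserved:
                self.allocated_ran += w.gpu_share
                return True
            return False
        # AI workloads are best effort within the remaining capacity.
        free = 1.0 - self.ran_reserved - self.allocated_ai
        if w.gpu_share <= free:
            self.allocated_ai += w.gpu_share
            return True
        return False

node = GpuNode()
print(node.admit(Workload("beamforming", 0.5, realtime=True)))       # True
print(node.admit(Workload("video-analytics", 0.3, realtime=False)))  # True
print(node.admit(Workload("ar-rendering", 0.2, realtime=False)))     # False: AI partition full
```

In a real deployment this arbitration would sit in the RAN scheduler and the infrastructure orchestrator, with far stricter real-time guarantees than a simple admission check.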

The potential success of AI-RAN relies on a number of techno / economic assumptions:

For Operators:

  • It is desirable to be able to deploy RAN management, analytics, optimization, prediction, automation algorithms in a multivendor environment that will provide deterministic, programmable results.
  • Network operators will be able and willing to actively configure, manage and tune RAN parameters.
  • Deployment of AI-RAN infrastructure will be profitable (compute costs offset by a combination of optimization-driven cost reductions and new service opportunities).
  • AI-RAN power efficiency, density, capacity and performance will, in time, exceed those of traditional architectures.
  • Network operators will be able to accurately predict demand and deploy infrastructure in time and in the right locations to capture it.
  • Network operators will be able to budget the CAPEX / OPEX associated with this investment before revenue materializes.
  • An ecosystem of vendors will develop, reducing supply chain risks.

For vendors:

  • RAN vendors will open their infrastructure and permit third parties to deploy AI applications.
  • RAN vendors will let operators and third parties program the RAN infrastructure.
  • There is sufficient market traction to productize AI-RAN.
  • The rate of development of AI and GPU technologies will outpace traditional architectures.
  • The cost of roadmap disruption and increased competition will be outweighed by new revenues, or will be accepted as the cost of survival.
  • AI-RAN represents an opportunity for new vendors to emerge and focus on very specific aspects of market demand without having to develop full-stack solutions.

For customers:

  • There will be a market and demand for AI as a Service, in which enterprises and verticals will want to use telco infrastructure that provides unique computing and connectivity benefits over on-premise or public cloud solutions.
  • There are AI/ML services that (will) necessitate high-performance computing environments, with guaranteed, programmable connectivity and a cost profile that is better mutualized through a multi-tenant environment.
  • Telecom operators are the best positioned to understand and satisfy the needs of this market.
  • Security, privacy, residency, performance and reliability will be at least equivalent to on-premise or cloud, with a cost / performance benefit.
As the market develops, new assumptions are added every day. The AI-RAN Alliance has defined three working groups to create the framework to validate them:
  1. AI for RAN: AI to improve RAN performance. This group focuses on how to program and optimize the RAN with AI. The expectation is that this work will drastically reduce the cost of the RAN, while allowing sophisticated spectrum, waveform and traffic manipulation for specific use cases.
  2. AI and RAN: Architecture to run AI and RAN on the same infrastructure. This group must define the multi-tenant architecture that allows the system to develop into a platform able to host a variety of AI workloads concurrently with the RAN.
  3. AI on RAN: AI applications to run on RAN infrastructure. This is the most ambitious and speculative group, defining the requirements on the RAN to support the AI workloads yet to be defined.
As with telco edge computing and RAN intelligence, while the technological challenges appear formidable, the commercial and strategic implications will likely dictate whether AI-RAN succeeds. Telecom operators are pushing for its implementation to increase control over RAN spending and user experience, while possibly developing new revenue through the diffusion of AIaaS. Traditional RAN vendors see the nascent technology as a further threat to their ability to sell programmable networks as black boxes, configured, sold and operated by them. New vendors see an opportunity to step into the RAN market and carve out market share at the expense of legacy vendors.

Friday, August 16, 2024

Rant: Why do we need 6G anyway?


I have to confess that, even after 25 years in the business, I am still puzzled by the way we build mobile networks. If tomorrow we were to restart from scratch, with today's technology and knowledge of the market, we would certainly design and deploy them in a very different fashion.

Increasingly, mobile network operators (MNOs) have realized that the planning, deployment and management of the infrastructure is a fundamentally different business than the development and commercialization of the associated connectivity services. They follow different investment and amortization cycles and have very different economic and financial profiles. For this reason, investors value network infrastructure differently from digital services, and many MNOs have decided to start separating their fibre, antenna and radio assets from their commercial operation.

This has resulted in a flurry of splits, spin-offs and divestitures, and the growth of specialized tower and infrastructure companies. If we follow this pattern to its logical conclusion, looking at the failed economics of 5G and the promises of 6G, one has to wonder whether we are on the right path.

Governments keep treating spectrum as a finite, exclusive resource, whereas, as demand for private networks and unlicensed spectrum increases, it is clear that there is a cognitive dissonance in the economic model. If 5G's success was predicated on enterprise, industry and vertical connectivity, and if these organizations have needs that cannot be satisfied by public networks, why would MNOs spend so much money on spectrum that is unlikely to bring additional revenue? The consumer market does not need another G until new services and devices emerge that mandate different connectivity profiles. The metaverse was a fallacy; autonomous vehicles, robots... are in their infancy and work around the lack of adequate connectivity by keeping their compute and sensors on device, rather than at the edge.

As the industry prepares for 6G and its associated future hype, nonsensical use cases and fantastical services, one has to wonder how we can stop designing networks for use cases that never emerge as dominant, forcing redesigns and late adaptation. Our track record as an industry is not great there. If you remember, 2G was designed for voice services; texting was the unexpected killer app. 3G was designed for Push-to-talk over Cellular, believe it or not (remember SIP and IMS...), and picture messaging and early browsing were the successes. 4G was designed for Voice over LTE (VoLTE), and video / social media were the key services. 5G was supposed to be designed for enterprise and industry connectivity but has failed to deliver so far (late implementation of slicing and 5G Standalone). So... what do we do now?

First, the economic model has to change. Rationally, it is not economically efficient for 4 or 5 MNOs to buy spectrum and deploy separate networks to cover the same population. We are seeing more and more network sharing agreements, but we must go further. In many countries, it makes more sense to have a single neutral infrastructure operator owning the cell sites, radios, fiber backhaul and even edge data centers / central offices, all the way up to but not including the core. This neutral host can run a wholesale economic model, and the MNOs can focus on selling connectivity products.

Of course, this would probably require some level of governmental and regulatory overhaul to facilitate this model. Obviously, one of the problems here is that many MNOs would have to transfer assets and, more importantly, personnel to that neutral host, which would undoubtedly see much redundancy in going from 3 or 4 teams to one. Most economically advanced countries have unions protecting these jobs, so this transition is probably impossible unless a concerted effort to cap hires, not replace retirements and retrain people is carried out over many years...

The other part of the equation is the connectivity and digital services themselves. Let's face it, connectivity differentiation has mostly been a pricing and bundling exercise to date. MNOs have not been overly successful with the creation and sale of digital services, with social media and video streaming services having captured most of consumers' interest. On the enterprise side, a large part of the revenue is related to the exploitation of last-mile connectivity, with secure private connections over public networks, first as MPLS, then SD-WAN, SASE and cloud interconnection, as the main services. Gen AI promises to be the new shining beacon of advanced services, but in truth there is very little there in the short term in terms of differentiation for MNOs.

There is nothing wrong with being a very good, cost-effective, performant utility connectivity provider. But most markets can probably accommodate only one or two of these. Other MNOs, if they want to survive, must create true value in the form of innovative connectivity services. This supposes not only a change of mindset but also of skill set. I think MNOs need to look beyond the next technology, the next G, and evolve towards a more innovative model. I have worked on many of these, from the framework to the implementation and systematic creation of sustainable competitive advantage. It is quite different work from the standards-and-technology-evolution approach favored by MNOs, but it is necessary for those seeking to escape the utility model.

In conclusion, 6G or technological improvements in speed, capacity, coverage, latency... are unlikely to solve the systemic economic and differentiation problems for MNOs unless more effort is put into service innovation and radical infrastructure sharing.

Thursday, August 8, 2024

The journey to automated and autonomous networks

 

The TM Forum has been instrumental in defining the journey towards automation and autonomous telco networks. 

As telco revenues from consumers continue to decline and the 5G promise to create connectivity products that enterprises, governments and large organizations will be able to discover, program and consume remains elusive, telecom operators are under tremendous pressure to maintain profitability.

The network evolution that started with Software Defined Networking and Network Functions Virtualization, and continues with the more recent cloud-native evolution, aims to deliver network programmability for the creation of innovative, on-demand connectivity services. Many of these services require deterministic connectivity parameters in terms of availability, bandwidth and latency, which necessitate an end-to-end cloud-native fabric and separation of the control and data planes. Centralized control of the cloud-native functions allows resources to be abstracted and allocated on demand as topology and demand evolve.

A benefit of a cloud-native network is that, as software becomes more open and standardized in a multi-vendor environment, many tasks that were either manual or relied on proprietary interfaces can now be automated at scale. As layers of software expose interfaces and APIs that can be discovered and managed by sophisticated orchestration systems, the network can evolve from manual, to assisted, to automated, to autonomous operation.


TM Forum defines five evolution stages, from fully manual operation (Level 0) to fully autonomous networks (Level 5); a minimal code sketch of this taxonomy follows the list.

  • Level 0 - Manual operation and maintenance: The system delivers assisted monitoring capabilities, but all dynamic tasks must be executed manually.
  • Level 1 - Assisted operations and maintenance: The system executes a specific, repetitive subtask based on pre-configuration, which can be recorded online and traced, in order to increase execution efficiency.
  • Level 2 - Partial autonomous network: The system enables closed-loop operations and maintenance for specific units under certain external environments via statically configured rules.
  • Level 3 - Conditional autonomous network: The system senses real-time environmental changes and, in certain network domains, will optimize and adjust itself to the external environment to enable closed-loop management via dynamically programmable policies.
  • Level 4 - Highly autonomous network: In a more complicated cross-domain environment, the system enables decision-making based on predictive analysis or active closed-loop management of service-driven and customer experience-driven networks via AI modeling and continuous learning.
  • Level 5 - Fully autonomous network: The system has closed-loop automation capabilities across multiple services, multiple domains (including partners' domains) and the entire lifecycle via cognitive self-adaptation.
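As a rough illustration only, the taxonomy above can be captured in a few lines of Python. The comments are my own paraphrase of the list, not a TM Forum data model.

```python
# Compact, illustrative encoding of the six autonomy levels listed above.
from enum import IntEnum

class AutonomyLevel(IntEnum):
    MANUAL = 0       # assisted monitoring only, dynamic tasks executed by people
    ASSISTED = 1     # pre-configured, repeatable subtasks executed by the system
    PARTIAL = 2      # closed loops for specific units, driven by static rules
    CONDITIONAL = 3  # real-time awareness, dynamically programmable policies per domain
    HIGH = 4         # predictive, service- and experience-driven closed loops via AI models
    FULL = 5         # cognitive, self-adapting closed loops across services, domains, lifecycle

def requires_human_decision(level: AutonomyLevel) -> bool:
    """Rough rule of thumb for this sketch: below Level 4, key decisions still sit with operations staff."""
    return level < AutonomyLevel.HIGH

print([lvl.name for lvl in AutonomyLevel if requires_human_decision(lvl)])
# ['MANUAL', 'ASSISTED', 'PARTIAL', 'CONDITIONAL']
```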
After describing the framework and conditions for the first three levels, the TM Forum has recently published a white paper describing the Level 4 industry blueprints.

The stated goals of Level 4 are to enable the creation and roll-out of new services within one week with deterministic SLAs, and the delivery of Network as a Service. Furthermore, this level should allow far fewer personnel to manage the network (on the order of thousands of person-years) while reducing energy consumption and improving service availability.

These are certainly very ambitious objectives. The paper goes on to describe "high value scenarios" to guide level 4 development. This is where we start to see cognitive dissonance creeping in between the stated objectives and the methodology.  After all, much of what is described here exists today in cloud and enterprise environments and I wonder whether Telco is once again reinventing the wheel in trying to adapt / modify existing concepts and technologies that are already successful in other environments.

First, the creation of deterministic connectivity is not (only) the product of automation. Telco networks, in particular mobile networks, are composed of a daisy chain of network elements coordinating customer traffic, signaling, data repositories, lookups, authentication, authorization, accounting and policy management functions. On the mobile front, signal effectiveness varies over time, as weather, power, demand, interference, devices... impact the effective transmission. Furthermore, the load on the base station, the backhaul, the core network and the internet peering point also varies over time and has an impact on overall capacity. As you can see, creating a connectivity product with deterministic speed, latency and capacity to enact Network as a Service requires a systemic approach. In a multi-vendor environment, the RAN, the transport and the core must be virtualized, relying on solid fiber connectivity as much as possible to enable the capacity and speed. Low latency requires multiple computing points, all the way to the edge or on premise. Deterministic performance requires not only virtualization and orchestration of the RAN, but also of the PON fiber, together with end-to-end slicing support and orchestration. This is something that I led at Telefonica with an open compute edge computing platform, a virtualized (XGS) PON on an ONF ONOS / VOLTHA architecture and an open virtualized RAN. This was not automated yet, as most of these elements were advanced prototypes at that stage, but the automation is the "easy" part once you have assembled the elements and operated them manually for enough time. The point here is that deterministic network performance is attainable but remains a distant objective for most operators, and it is a necessary condition for NaaS, even before automation and autonomous networks.

Second, the high-value scenarios described in the paper are all network-related. Ranging from network troubleshooting to optimization and service assurance, these are all worthy objectives, but they still do not feel "high value" in terms of creating new services. While it is natural that automation first focuses on cost reduction for the roll-out, operation, maintenance and healing of the network, one would have expected a more ambitious description of new services.

All in all, the vision is ambitious, but there is still much work to do in fleshing out the details and linking the promised benefits to concrete services beyond network optimization.

Tuesday, February 2, 2016

How to Binge On?

So... you have been surprised, excited, curious about T-Mobile US' Binge On launch.
The innovative service is defining new cooperative models with the so-called OTT providers by blending existing and new media manipulation technologies.

You are maybe wondering whether it would work for you? Do you know what it would take for you to launch a similar service?
Here is a quick guide of what you might need if you are thinking along those lines.


The regulatory question

First, you probably wonder whether you can even launch such a service. Is it contravening any net neutrality rule? The answer might be hard to find. Most net neutrality provisions are vague, inaccurate or downright technologically impossible to enforce, so when launching a new service, the best one can have is an opinion.
MNOs have essentially two choices: either not innovating and launching endless minute variations of existing services, or launching innovative services. The latter strategy will always carry a measure of risk, but MNOs can't aspire to be disruptive without risk taking. In this case, the risk is fairly limited, provided that the service is voluntary, with easy opt-in / opt-out. There will always be challenges - even legal ones - to that operating assumption, but operators have to accept that as part of the cost of innovation. In other words, if you want to create new revenue streams, you have to grow some balls and take some risks; otherwise, just be a great network and abandon the ambition to sell services.


The service

For those not familiar with Binge On, here is a quick overview. Binge On allows any new or existing subscriber with a 3GB data plan or higher to stream videos for free from over 40 popular content providers, including Netflix, Hulu, HBO and ESPN.
The videos are zero-rated (they do not count towards the subscriber's quota) and are limited to 480p definition.
The service is free.

The content

Obviously, in the case of Binge On, the more content providers with rich content sign on for the service, the richer and more attractive the offering. T-Mobile has been very smart to entice some of the most popular video services to sign on for Binge On. Netflix and HBO have a history of limited collaboration with a few network operators, but no MNO to date has been able to create such a rich list of video partnerships.
Experience shows that the key to successful video services is the breadth, depth and originality of the content. In this case, T-Mobile has decided not to intervene in content selection, simply allowing some of the most popular video services to participate in the service.
Notably, Facebook, Twitter, Google, Apple and Amazon properties are missing, with YouTube claiming technical incompatibility as the reason for not participating.

The technology

What does the service entail technically? The first capability a network needs to enable such a service is to discriminate content from a participating video provider versus other services. In some cases, when traffic is not encrypted, it is just a matter of creating a rule in the DPI or web gateway engine to apply zero-rating to a specific content source / origin.

Picking Netflix traffic out of the rest of the video traffic is not necessarily simple, since many premium video service providers deliver their service over encrypted protocols to avoid piracy or privacy issues. As a result, a level of integration is necessary for the network to unambiguously detect a video session from Netflix.

In this case, unencrypted metadata in the headers can be used to identify the service provider and even the content. That is not all, though, as conceivably some services might not be exclusively video. If we imagine a service like Facebook being part of Binge On, the network now needs to separate browsing traffic from video. This can be achieved with traffic management platforms that usually deploy heuristics or algorithms to segregate traffic from the same source by looking at packet size, session duration, packet patterns, etc.
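As an illustration of the kind of heuristic such a platform might apply, here is a minimal sketch; the features and thresholds are hypothetical and far cruder than what a commercial traffic classifier would use.

```python
# Minimal sketch of flow-level heuristics that might separate video from browsing
# when payloads are encrypted. Thresholds and feature names are hypothetical.
from dataclasses import dataclass

@dataclass
class FlowStats:
    duration_s: float        # how long the flow has been active
    avg_packet_bytes: float  # mean downlink packet size
    downlink_kbps: float     # sustained downlink throughput

def looks_like_video(flow: FlowStats) -> bool:
    # Long-lived, high-throughput flows with large, steady packets are
    # typically streaming video; short bursty flows are typically browsing.
    return (flow.duration_s > 30
            and flow.avg_packet_bytes > 1000
            and flow.downlink_kbps > 500)

print(looks_like_video(FlowStats(duration_s=240, avg_packet_bytes=1400, downlink_kbps=2500)))  # True
print(looks_like_video(FlowStats(duration_s=4, avg_packet_bytes=600, downlink_kbps=900)))      # False
```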

Now that you are able to discriminate the content from participating partners, you need to tie it to subscribers who have opted in or out of the service. This is usually performed in the PCRF (Policy and Charging Rules Function) or in the EPC where the new service is created. A set of rules is assembled to associate the list of content providers with a zero-rated class of service and to associate a subscriber class with these services. The subscriber class is a toggled setting in the subscriber profile that resides in the subscriber database. As a subscriber starts an HBO episode, the network detects that this service is part of Binge On, looks up whether the user is subscribed to the service and applies the corresponding rate code. As a result, the amount of data consumed for the session is either deducted from the subscriber's quota or not, depending on whether she is a Binge On user.
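A simplified sketch of that rating decision could look like the following; the provider names, rate codes and quota handling are placeholders for what would really live in the PCRF and charging system.

```python
# Illustrative sketch of the rating decision described above: traffic is
# zero-rated only when the provider participates and the subscriber has opted in.
PARTICIPATING_PROVIDERS = {"netflix", "hulu", "hbo", "espn"}

def rate_code(provider: str, subscriber_opted_in: bool) -> str:
    if provider.lower() in PARTICIPATING_PROVIDERS and subscriber_opted_in:
        return "ZERO_RATED"   # session bytes are not counted against the quota
    return "STANDARD"         # session bytes are deducted from the quota

def charge_session(provider: str, opted_in: bool, session_mb: int, quota_mb: int) -> int:
    """Return the remaining quota after the session."""
    if rate_code(provider, opted_in) == "ZERO_RATED":
        return quota_mb
    return max(quota_mb - session_mb, 0)

print(charge_session("HBO", opted_in=True, session_mb=300, quota_mb=3000))   # 3000
print(charge_session("HBO", opted_in=False, session_mb=300, quota_mb=3000))  # 2700
```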

We are almost done.

The big gamble taken by T-Mobile is that customers will trade unlimited quality for unlimited content. Essentially, the contract is that those who opt in to Binge On will be able to stream unlimited video from participating providers on the condition that the video's definition is limited to 480p. In many cases, this is an acceptable quality for phones and tablets, as long as you do not hotspot the video to a laptop or a TV.
That limitation is the quid pro quo that T-Mobile is enforcing, allowing it to maintain cost and service quality predictability.

That capability requires more integration between the content provider and T-Mobile. 480p is a video display target that usually describes a 640 x 480 pixel picture size. Videos encoded at that definition will vary in size, depending on the codec used, the number of frames per second and other parameters.

Most premium video providers in Binge On deliver their content using adaptive bit rate, essentially offering a number of possible video streams ranging from low to high definition. In this case, T-Mobile and the content provider have to limit the format to 480p at most. This could be done by the content provider, of course, since it has all the formats. It could decide to send only the 480p and lower versions, but that would be counterproductive: the content provider does not know whether the subscriber has opted in to Binge On, and that information belongs to T-Mobile and cannot be freely shared.
As a result, content providers send the video in their usual definitions, leaving T-Mobile with the task of selecting the right format.

There are several ways to achieve that. The simplistic approach is just to limit the delivery bit rate so that the phone can never select more than 480p. This is a hazardous approach, because 480p encoding can result in a delivery bit rate demand ranging from 700 kbps to 1.5 Mbps depending on the codec being used. That range is too wide for T-Mobile to provide any guarantee. Set the limit too low and some providers will never achieve 480p; set it too high and subscribers will see fluctuating quality, with even 720p or 1080p formats slipping through.
The best way to achieve the desired result is to intercept the adaptive bit rate manifest delivered by the content provider at the establishment of the session and strip out all definitions above 480p. This guarantees that the video will never be delivered above 480p but can still fluctuate based on network congestion. This can be achieved either with a specialized video optimization platform or in some of the more advanced EPCs.
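To illustrate the manifest-stripping approach, here is a minimal sketch operating on a fabricated HLS-style master playlist; a production implementation would also handle DASH manifests, audio-only variants and many edge cases.

```python
# Sketch of the manifest-rewriting idea on an HLS-style master playlist:
# drop every rendition whose vertical resolution exceeds 480 lines.
# The sample manifest is fabricated for illustration.
import re

def strip_above_480p(master_playlist: str) -> str:
    out, skip_next_uri = [], False
    for line in master_playlist.splitlines():
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"RESOLUTION=(\d+)x(\d+)", line)
            height = int(m.group(2)) if m else 0
            if height > 480:
                skip_next_uri = True   # drop this rendition and its URI line
                continue
        elif skip_next_uri and not line.startswith("#"):
            skip_next_uri = False
            continue
        out.append(line)
    return "\n".join(out)

sample = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x480
480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p.m3u8"""

print(strip_above_480p(sample))   # only the 480p rendition remains
```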

As we can see, the service is sophisticated and entails several steps. A network's capacity to deploy such a service is directly linked to its ability to link and instantiate services and network functions in an organic manner. Only the most innovative EPC, traffic detection and video management function vendors can provide the flexibility and cost effectiveness to launch such a service.


Wednesday, August 12, 2015

The orchestrator conundrum in SDN and NFV

We have seen over the last year a flurry of activity around orchestration in SDN and NFV. As I have written about here and here, orchestration is a key element and will likely make or break SDN and NFV success in wireless.

A common mistake associated with orchestration is to assume that it covers the same elements or objectives in SDN and NFV. This is a significant issue, because while SDN orchestration is about resource and infrastructure management, NFV orchestration should be about service management. There is admittedly a level of overlap, particularly if you define services as both network and customer sets of rules and policies.

To simplify, we'll say here that SDN orchestration is about resource allocation and virtual, physical and mixed infrastructure auditing, assurance and management, while NFV's is about creating rules for traffic and service instantiation based on subscriber, media, origin, destination, etc...

The two orchestration models are complementary (it is harder to create and manage services if you do not have visibility into / understanding of available resources and, conversely, it can be more efficient to manage resources knowing what services run on them) but not necessarily well integrated. A bevy of standards and open-source organizations (ETSI ISG NFV, OPNFV, MEF, OpenStack, OpenDaylight...) are busy trying to map one onto the other, which is no easy task. SDN orchestration is well defined in terms of its purview, less so in terms of implementation, but a few models are available to experiment on. NFV orchestration is in its infancy, still defining what the elements of service orchestration are, their proposed interfaces with the infrastructure and the VNFs and, generally speaking, how to create a model for service instantiation and management.

For those who have followed this blog and my clients who have attended my SDN and NFV in wireless workshop, it is well known that the management and orchestration (MANO) area is under intense scrutiny from many operators and vendors alike.
Increasingly, infrastructure vendors who are seeing the commoditization of their cash cow understand that the brain of tomorrow's network will be in MANO.
Think of MANO as the network's app store. It controls which apps (VNFs) are instantiated and what level of resources is necessary to run them, and it stitches VNFs together (service chaining) to create services.
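As a toy illustration of that "app store" role, a service can be represented as an ordered chain of VNFs plus the resources to request for each; the VNF names and figures below are made up, and this is not an ETSI MANO data model.

```python
# Toy illustration of service chaining: an ordered list of VNFs with the
# resources a MANO-like engine would request for each. Entirely hypothetical.
service_chain = {
    "service": "secure-video-offload",
    "vnfs": [
        {"name": "vDPI",      "vcpus": 4, "ram_gb": 8},
        {"name": "vFirewall", "vcpus": 2, "ram_gb": 4},
        {"name": "vVideoOpt", "vcpus": 8, "ram_gb": 16},
    ],
}

def instantiation_order(chain: dict) -> list[str]:
    """Walk the chain in the order traffic will traverse it."""
    return [vnf["name"] for vnf in chain["vnfs"]]

def total_resources(chain: dict) -> dict:
    """Aggregate the resources the orchestrator must reserve for the service."""
    return {
        "vcpus": sum(v["vcpus"] for v in chain["vnfs"]),
        "ram_gb": sum(v["ram_gb"] for v in chain["vnfs"]),
    }

print(instantiation_order(service_chain))  # ['vDPI', 'vFirewall', 'vVideoOpt']
print(total_resources(service_chain))      # {'vcpus': 14, 'ram_gb': 28}
```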
The problem is that MANO is not yet fully defined by ETSI, so anyone who wants to orchestrate VNFs today is either building their own or is stuck with the handful of vendors who provide MANO-like engines. Since MANO is ill-defined, the integration requires a certain level of proprietary effort. Vendors will say that it is all based on open interfaces, but the reality is that there is no mechanism in the standard today for a VNF to declare its capabilities, its needs and its intent, so a MANO integration requires some level of abstraction or deep fine-tuning.
As a result, MANO can become very sticky if deployed in an operator network. The VNFs can come and go and vendors can be swapped at will, but the MANO has the potential to be a great anchor point.
It is not a surprise therefore to see vendors investing heavily in this field or acquiring the capabilities:

  • Cisco acquired TailF in 2014
  • Ciena acquired Cyan this year
  • Cenx received $12.5M in funding this year...

At the same time, Telefonica has launched an open source collaborative effort called openMANO to stimulate the industry and reduce risks of verticalization of infrastructure / MANO vendors.

For more information on how SDN and NFV are implemented in wireless networks, vendors and operators strategies, look here.

Monday, February 23, 2015

The future is cloudy: NFV 2020 part II

I have received some comments after my previous post arguing that maybe the future of SDN and NFV is not as far as I am predicting. As we are all basking in the pre Mobile World Congress excitement, inundated by announcements from vendors and operators alike trying to catch the limelight before the deafening week begins, I thought I would clarify some of my thoughts.

We have seen already this week some announcements of virtualization plans, products and even deployments.

One of the main problems with a revolutionary approach such as SDN and/or NFV implementation is that it implies a complete network overhaul to deliver its full benefits. In all likelihood, no network operator is able to fully carry out these kinds of changes in less than a ten-year timescale, so what should be done first?

The choice is difficult, since there are a few use cases that seem easy enough to roll out but deliver few short-term benefits (vCPE, some routing and switching functions...), while the projects that should deliver the highest savings, the meaty ones, seem quite far from maturity (EPC, IMS, C-RAN...). Any investment on this front is going to be just that... an investment with little to no return in the short term.

The problem is particularly difficult to solve because most of the value associated with virtualization of mobile networks in the short term is supposedly tied to CAPEX and OPEX savings. I have previously highlighted this trend and it is not abating; if anything, it is accelerating.
Islands of SDN or NFV implementations in a sea of legacy network elements are not going to generate much savings. They could arguably generate new revenue streams if they were used to launch new services, but the focus so far has been to emulate and translate physical functions and networks into virtualized ones, with little effort in terms of new service creation.

As a result, the business case to deploy SDN or NFV in a commercial network today is negative and likely to stay so for the next few years. I expect the momentum to continue, though, since it will have to work and to deliver the expected savings for network operators to stand a chance to stay in business.

The other side of this coin is the service offering.  While flexibility, time to market and capacity to launch new services are always quoted as some of the benefits of network virtualization, it seems that many operators have given up on innovation and service creation. The examples of new services are few and far between and I would hope that these would be the object of more focused efforts.

At last, it seems that one of my predictions may be fulfilled shortly: a friend pointed out that this year's GSMA freebie for its members at the show will be... a selfie stick.

Thursday, January 22, 2015

The future is cloudy: NFV 2020

As the first phase of ETSI ISG NFV wraps up and phase 1's documents are being released, it is a good time to take stock of the progress to date and what lies ahead.

ETSI members have set an ambitious agenda to create a function and service virtualization strategy for broadband networks, aiming at reducing hardware and vendor dependency while creating an organic, automated, programmable network.

The first set of documents approved and published represents great progress and possibly one of the fastest roll-outs of a new standard: only two years. It also highlights how much work is still necessary to make the vision a reality.

Vendor announcements are everywhere: "NFV is a reality, it is happening, it works, you can deploy it in your networks today...". I have no doubt Mobile World Congress will see several "world's first commercial deployment of [insert your vLegacyProduct here]...". The reality is a little more nuanced.

Network Functions Virtualization, as a standard, does not today allow a commercial deployment out of the box. There are too many ill-defined interfaces, competing protocols and missing APIs to make it plug and play. The only viable deployment scenario today is a single-vendor or tightly integrated (proprietary) dual-vendor strategy for siloed services / functions. From relatively simple (Customer Premises Equipment) to very complex (Evolved Packet Core), it will be possible to see commercial deployments in 2015, but they will not be able to illustrate all the benefits of NFV.

As I mentioned before, orchestration, integration with SDN, performance, security, testing, governance... are some of the challenges that remain today for viable commercial deployment of NFV in wireless networks. These are only the technological challenges; as mentioned before, the operational challenge of evolving and training the operators' workforce is probably the largest.

From my many interactions and interviews with network operators, it is clear that there are several different strategies at play.

  1. The first strategy is to roll out a virtualized function / service with one vendor, after having tested, integrated and trialed it. It is a strategy that we are seeing a lot in Japan or Korea, for instance. It provides a pragmatic learning process towards implementing virtualized functions in commercial networks, recognizing that standards and vendor implementations will not be fully interoperable for a few years.
  2. The second strategy is to stimulate the industry through standards and forum participation, proofs of concept and even homegrown development. This strategy is more time- and resource-intensive but leads to the creation of an ecosystem. No big bang, but an evolutionary, organic roadmap that picks and chooses which vendors, network elements and services are ready for trial, PoC, limited and commercial deployment. The likes of Telefonica and Deutsche Telekom are good examples of this approach.
  3. The third strategy is to define very specifically the functions that should be virtualized, along with their deployment, management and maintenance model, and select a few vendors to enact this vision. AT&T is a good illustration here. The advantage is probably a tailored experience that meets their specific needs in a timely fashion before standards completion; the drawback is reduced flexibility, as vendors are not interchangeable and integration is somewhat proprietary.
  4. The last strategy is not a strategy; it is more of a wait-and-see approach. Many operators do not have the resources or the budget to lead or manage this complex network and business transformation. They are observing the progress and placing bets in terms of what can be deployed when.
As it stands, I will continue monitoring and chairing many of the SDN / NFV shows this year. My report on SDN / NFV in wireless networks is changing fast, as the industry is, so look out for updates throughout 2015.

Tuesday, October 21, 2014

Report from SDN / NFV shows part II

Today, I would like to address what, in my mind, is a fundamental issue with the expectations raised by SDN/NFV in mobile networks.
Two weeks ago I was in Dallas, speaking at SDN NFV USA and the Telco Cloud Forum.

While I was busy avoiding bodily fluids with everyone at the show, I got the chance to keynote a session (slides here) with Krish Prabhu, CTO of AT&T labs.

Krish explains that the main driver for the creation and implementation of Domain 2.0 is the fact that the company's CAPEX, while staggering at $20 billion per year, is not likely to significantly increase, while traffic (used here as a proxy for costs) will increase at a minimum of 50% compounded annual growth rate for the foreseeable future.
Krish then laments:
"Google is making all the money, we are making all the investment, we have no choice but to squeeze our vendors and re-architect the network."
Enter SDN / NFV.
Really? These are the only choices? I am a little troubled by the conclusions here. My understanding is that Google, Facebook, Netflix, in short the OTT providers, have usually looked at creating services and value for their subscribers first and then, when faced with runaway success, had to invent new technologies to meet their growth challenges.

Most of the rhetoric surrounding operators' reasons for exploring SDN and NFV nowadays seems to be about cost reduction. It is extremely difficult to get an operator to articulate what type of new service they would launch if their network were fully virtualized and software-defined today. You usually get the salad of existing network functions newly adorned with a "v": vBRAS, vFirewall, vDPI, vCPE, vEPC...
While I would expect these network functions to lend themselves to virtualization, they do not create new services or necessarily more value. A cheaper way to create, deploy and manage a firewall is not a new service.

The problem seems to be that our industry is once again tremendously technology-driven, rather than customer-driven. Where are the marketers and service managers who will invent, for instance, real-time voice translation services by virtualizing voice processing and translation functions in the phone and at the edge? There are hundreds of new services to be invented, and I am sure SDN and NFV will help realize them. I bet Google is closer to enabling this use case than most mobile network operators. That is a problem, because operators can still provide value if they innovate, but innovation must come first from services, not technology. We should focus on the what first, the how after.
End of the rant, more techno posts soon. If you like this, don't forget to buy the report.

Tuesday, September 30, 2014

NFV & SDN 2014: Executive Summary


This post is extracted from my report published October 1, 2014.

Cloud and Software Defined Networking have been technologies explored successively in academia, IT and enterprise since 2011 and the creation of the Open Networking Foundation. 
They were mostly subjects of interest relegated to science projects in wireless networks until, in the fall of 2013, a collective of 13 mobile network operators co-authored a white paper on Network Functions Virtualization. This white paper became a manifesto and catalyst for the wireless community and was seminal to the creation of the eponymous ETSI Industry Specification Group.
Almost simultaneously, AT&T announced the creation of a new network architecture vision, Domain 2.0, relying heavily on SDN and NFV as building blocks for its next-generation mobile network.

Today, SDN and NFV are hot topics in the industry and many companies have started to position themselves with announcements, trials, products and solutions.

This report is the result of hundreds of interviews, briefings and meetings with many operators and vendors active in this field. In the process, I have attended, participated in and chaired various events such as OpenStack, ETSI NFV ISG and the SDN & OpenFlow World Congress, and became a member of ETSI, OpenStack and the TM Forum.
The Open Networking Foundation, the Linux Foundation, OpenStack, the OpenDaylight project, the IEEE, ETSI and the TM Forum are just a few of the organizations involved in the definition, standardization or facilitation of cloud, SDN and NFV. This report provides a view of the different organizations' contributions and their progress to date.

Unfortunately, there is no such thing as SDN-NFV today. These are technologies that have overlaps and similarities but stand widely apart. Software Defined Networking is about managing network resources. It is an abstraction that allows the definition and management of IP networks in a new fashion. It separates the data from the control plane and allows network resources to be orchestrated and used across applications independently of their physical location. SDN exhibits a level of maturity through a variety of contributions to its leading open-source community, OpenStack. In its ninth release, the architectural framework is well suited to abstracting cloud resources, but it is dominated by enterprise and general IT interests, with little in terms of focus and applicability for wireless networks.

Network Functions Virtualization is about managing services. It allows the breaking down and instantiation of software elements into virtualized entities that can be invoked, assembled, linked and managed to create dynamic services. NFV, by contrast, through its ETSI specification group, is focused exclusively on wireless networks but, in the process of releasing its first specifications, is still very incomplete in its architecture, interfaces and implementation.

SDN may or may not comprise NFV elements, and NFV may or may not be governed or architected using SDN. Many of the Proofs of Concept (PoC) examined in this document attempt to map the SDN architecture onto NFV functions in the hope of bridging the gap. The two frameworks can be complementary, but they are both suffering from growing pains and a diverging set of objectives.


The intent is to paint a picture of the state of SDN and NFV implementations in mobile networks. This report describes what has been trialed, deployed in labs and deployed commercially, which elements are likely to be virtualized first, the timeframes, the strategies and the main players.

Tuesday, September 9, 2014

SDN & NFV part VI: Operators, dirty your MANO!

While NFV in ETSI was initially started by network operators with their founding manifesto, in many instances we see that although there is a strong desire to force telecom appliance commoditization, there is little appetite among operators to perform the sophisticated integration necessary for these new systems to work.

This is, for instance, reflected in MANO, where operators seem to have put back the onus on vendors to lead the effort. 

Some operators (Telefonica, AT&T, NTT…) seem to invest resources not only in monitoring the process but also in actual development of the technology, but by and large, according to my study, MNOs seem to have taken a passenger seat in NFV implementation efforts. Many vendors note that MNOs tend to have a very hands-off approach towards the PoCs they "participate" in, offering guidance, requirements or, in some cases, just lending their name to the effort without "getting their hands dirty".

The Orchestrator’s task in NFV is to integrate with OSS/BSS and to manage the lifecycle of the VNFs and NFVI elements. 

It onboards new network services and VNFs, and it performs service chaining in the sense that it decides through which VNFs, and in what order, traffic must go, according to routing rules and templates.

These routing rules are called forwarding graphs. Additionally, the Orchestrator performs policy management between VNFs. Since all VNFs are proprietary, integrating them within a framework that allows their components to interact is a huge undertaking. MANO is probably the part of the specification that is the least mature today and requires the most work.
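For illustration, a forwarding graph can be thought of as a set of classifier rules mapping a flow to an ordered VNF path; the match keys and VNF names below are placeholders, not ETSI-defined identifiers.

```python
# Illustrative sketch of a forwarding graph: a traffic classifier selects
# which ordered list of VNFs a flow must traverse.
FORWARDING_GRAPH = [
    # (match condition, ordered VNF path)
    ({"app": "video"},                ["vDPI", "vVideoOpt", "vRouter"]),
    ({"app": "web", "roaming": True}, ["vDPI", "vFirewall", "vNAT", "vRouter"]),
    ({},                              ["vRouter"]),   # default path
]

def select_path(flow: dict) -> list[str]:
    for match, path in FORWARDING_GRAPH:
        if all(flow.get(k) == v for k, v in match.items()):
            return path
    return ["vRouter"]

print(select_path({"app": "video", "roaming": False}))  # ['vDPI', 'vVideoOpt', 'vRouter']
print(select_path({"app": "mail"}))                     # ['vRouter'] (default)
```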


Since it is the brain of the framework, failure of MANO to reach a level of maturity enabling consensus among the participants of the ISG will inevitably relegate NFV to vertical implementations. This could lead to a network with a collection of vertically virtualized elements, each with its own MANO or very high-level API abstractions, considerably reducing overall system elasticity and programmability. SDN OpenStack-based models can be used for MANO orchestration of resources (Virtualized Infrastructure Manager) but offer little applicability in the pure orchestration and VNF management field beyond the simplest IP routing tasks.


Operators who are serious about NFV in wireless networks should seriously consider developing their own orchestrator or, at a minimum, implementing strict orchestration guidelines. They could force vendors to adopt a minimum set of VNF abstraction templates for service chaining and policy management.

Tuesday, July 1, 2014

Mobile network 2030





It is summer, nice and warm. England and Italy are out of the world cup, France will beat Germany on Friday, then Brazil and Argentina in the coming weeks to obtain their second FIFA trophy. It sounds like a perfect time for a little daydreaming and telecom fiction...

The date is February 15, 2030

The Mobile World Congress is a couple of weeks away and has returned to Cannes, as the attendance and indeed the investments in what used to be mobile networks have declined drastically over the last few years. Gone are the years of opulence and extravagant launches in Barcelona; the show now looks closer to a medium-sized textile convention than the great mass of flashy technology and gadgets it used to be in its heyday.

When did it start to devolve? What was the signal that killed what used to be a trillion-dollar industry in the '90s and early 2000s? As usual, there is not one cause but a convergence of events that gathered a momentum few saw coming and fewer tried to stop.

Net neutrality was certainly one of these events. If you remember, back in 2011, people started to realize how exposed fixed and wireless networks were to legal and illegal interception. Following the various NSA scandals, public pressure mounted to protect digital privacy.
In North America, the battle between neutrality's proponents and opponents was fierce, eventually leading to a status quo of sorts, with many content providers and network operators in an uneasy collaborative dynamic. Originally, content providers unwilling to pay for traffic delivery in wireless networks attempted to secure superior user experience by implementing increasingly bandwidth-hungry apps. When these started to contend for network resources, carriers stepped in and aggressively throttled, capped or otherwise "optimized" traffic. In reaction, premium content providers moved to an encrypted traffic model as a means to obfuscate traffic and prevent interception, mitigation and optimization by carriers. Soon enough, though, the costs and latency added by encryption proved impractical. Furthermore, some carriers started to throttle and cap all traffic equally, claiming to adhere to the letter of net neutrality, which ended up having a terrible effect on user experience. In the end, cooler heads prevailed, and content providers and carriers created integrated video networks, where transport, encryption and ad insertion were performed at the edge, while targeting, recommendation and fulfillment ended up in the content provider's infrastructure.

In Europe, content and service providers saw "net neutrality" as the perfect excuse to pressure political and regulatory organizations into forcing network providers to deliver digital content unfiltered and un-prioritized, at best possible effort. The result ended up being quite disastrous, as we know: with content mostly produced outside Europe and encrypted, operators became true utility service providers. They discovered overnight that their pipes could become even dumber than they already were.

Of course, the free voice and texting services launched by some of the new-entrant 5G licensees in the 2020s accelerated the trend and the nationalization of many of the pan-European network operator groups.

The transition was relatively easy, since many had transitioned to fully virtual networks and contracted ALUSSON, the last "European" telecom equipment manufacturer, to manage their networks. After operators had collectively spent over 100 billion euros to virtualize in the first place, ALUSSON emerged as the only clear winner of the cost benefits brought by virtualization.
Indeed, virtualization was attractive and very cost-effective on paper but proved very complex and organizationally intensive to implement in the end. Operators had miscalculated their capacity to shift their workforce from telecom engineering to IT when they found out that the skill set to manage their networks had always been in the vendors' hands. Few groups were able to massively retool their workforce, if you remember the great telco strikes of 2021-2022.
In the end, most ended up contracting and transitioning their assets to their network vendor. Obviously, liberated from the task of managing their networks, most were eager to launch new services, which was one of the initial rationales for virtualization. Unfortunately, they found out that service creation was much better handled by small, agile, young entrepreneurial structures than by large, unionized, middle-aged ones... With a couple of notable exceptions, broadband networks were written off as broadband access was written into European countries' constitutions, and networks were aggregated at the pan-European level to become pure utilities, when they were not downright nationalized.

Outside Europe and North America, Goopple and HuaTE dominate, after voraciously acquiring licenses in emerging countries that were ill-equipped to weigh the long-term value of these licenses against the free network infrastructure these companies provided. The launch of their proprietary SATERR (Satellite Aerial Terrestrial Relay) technology proved instrumental in creating the first fully vertical service / network / content / device conglomerates.

Few were the operators able to discern the importance of evolving their core asset, "enabling communication", into a dominant position in their market. Those who succeeded share a few common attributes:

They realized first that their business was not about counting calls, bytes or texts but about enabling communication. They started to think in terms of services rather than technology and understood that the key was in service enablement. Understanding that services come and go and die in a matter of months in the new economy, they strove not to provide the services but to create the platform to enable them.

In some cases, they transitioned into full advertising and personal digital management agencies, harnessing big data and analytics to enrich digital services with presence, location, preference, privacy and corporate awareness. This required many organizational changes, but as it turned out, marketing analysts were much easier and more cost-effective to recruit than network and telecom engineers. Network management became the toolset, not the vocation.

In other cases, operators became abstraction layers, enabling content and service providers to better target, advertise, aggregate, obfuscate, disambiguate and contextualize physical and virtual communication between people and machines.

In all cases, they understood that the "value chain" as they used to know it and the consumer need for communication services were better served by an ever-changing ecosystem, where there was no "position of strength" and where coopetition was the rule rather than the exception.