Showing posts with label Open RAN. Show all posts

Thursday, July 31, 2025

The Orchestrator Conundrum strikes again: Open RAN vs AI-RAN

10 years ago (?!) I wrote about the overlaps and potential conflicts between the different orchestration efforts in SDN and NFV. Essentially, I observed that it is desirable to orchestrate network resources with awareness of services, and that service and resource orchestration should have hierarchical, prioritized interactions, so that a service's deployment and lifecycle are managed within resource capacity and, when that capacity fluctuates, priorities can be enforced.
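The hierarchical, priority-aware interaction described above can be sketched in a few lines. This is a toy illustration, with hypothetical names and numbers, not a MANO or SMO implementation: services are admitted against a shared resource pool, and when capacity shrinks, the lowest-priority services are preempted first.

```python
# Toy sketch of resource-aware service orchestration with priority
# enforcement. All names and capacities are illustrative.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    demand: int      # resource units required
    priority: int    # lower number = higher priority

class ResourceOrchestrator:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.running: list[Service] = []

    def used(self) -> int:
        return sum(s.demand for s in self.running)

    def deploy(self, svc: Service) -> bool:
        # Admit a service only within remaining capacity.
        if self.used() + svc.demand <= self.capacity:
            self.running.append(svc)
            return True
        return False

    def shrink_capacity(self, new_capacity: int) -> list[Service]:
        # Capacity fluctuated: preempt lowest-priority services until we fit.
        self.capacity = new_capacity
        evicted = []
        for svc in sorted(self.running, key=lambda s: s.priority, reverse=True):
            if self.used() <= self.capacity:
                break
            self.running.remove(svc)
            evicted.append(svc)
        return evicted

orch = ResourceOrchestrator(capacity=100)
orch.deploy(Service("emergency-voice", 30, priority=0))
orch.deploy(Service("enterprise-slice", 40, priority=1))
orch.deploy(Service("best-effort-video", 30, priority=2))
evicted = orch.shrink_capacity(70)   # e.g. a node failure
print([s.name for s in evicted])     # best-effort video is preempted first
```

In a real deployment this logic is split between a service orchestrator and a resource orchestrator; the point here is the hierarchy itself: admission within capacity, preemption by priority.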

Service orchestrators have never really been deployed successfully at scale, for a variety of reasons, but primarily because this control point was identified early on as a strategic asset by network operators and traditional network vendors alike. A few network operators attempted to create an open source orchestration model (Open Source MANO), while traditional telco equipment vendors developed their own versions and refused to integrate their network functions with the competition's. In the end, most of the actual implementation focused on Virtual Infrastructure Management (VIM) and vertical VNF management, while orchestration remained fairly proprietary per vendor. Ultimately, Cloud Native Network Functions appeared and were deployed in Kubernetes, inheriting its native resource management and orchestration capabilities.

In the last couple of years, Open RAN has attempted to collapse RAN Element Management Systems (EMS), Self Organizing Networks (SON) and Operation Support Systems (OSS) into the concept of Service Management and Orchestration (SMO). Its aim is ostensibly to provide a control platform for RAN infrastructure and services in a multivendor environment. The non real time RAN Intelligent Controller (RIC) is one of its main artefacts, allowing the deployment of rApps designed to visualize, troubleshoot, provision, manage, optimize and predict RAN resources, capacity and capabilities.

This time around, the concept of SMO has gained substantial ground, mainly because the leading traditional telco equipment manufacturers were not OSS / SON leaders and because orchestration was an easy target for non RAN vendors looking for a greenfield opportunity.

As we have seen, whether for MANO or SMO, the barriers to adoption weren't really technical but rather economic and commercial, as leading vendors were trying to protect their business while growing into adjacent areas.

Recently, AI-RAN has emerged as an interesting initiative, positing that RAN compute would evolve from specialized, proprietary and closed to generic, open and disaggregated. Specifically, RAN compute could evolve from specialized silicon to GPUs. GPUs are able to handle the complex calculations necessary to manage a RAN workload, with spare capacity. Their cost, however, greatly outweighs their utility if used exclusively for RAN. Since GPUs are used in all sorts of high compute environments to facilitate Machine Learning, Artificial Intelligence, Large and Small Language Models, model training and inference, the idea emerged that if RAN deploys open, generic compute, it could be used for RAN workloads and their optimization (AI for RAN), for AI applications running on the RAN infrastructure (AI on RAN), and ultimately for AI/ML workloads completely unrelated to RAN, sharing the same infrastructure (AI and RAN).

While this could theoretically solve the business case of deploying costly GPUs in hundreds of thousands of cell sites, provided that the idle compute capacity could be resold as GPUaaS or AIaaS, it poses new challenges from a service / infrastructure orchestration standpoint. The AI-RAN Alliance is now faced with understanding the orchestration challenges between resources and AI workloads.
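As a hedged sketch of what that sharing could look like, the toy scheduler below gives the RAN workload strict priority on a cell site GPU and admits AI jobs only into the remaining headroom, preempting them as RAN load rises. Every name and number is hypothetical; real AI-RAN scheduling would sit inside Kubernetes-level orchestration rather than a single function.

```python
# Hypothetical illustration of the AI-and-RAN sharing idea: the RAN
# workload gets strict priority on the GPU, and whatever headroom
# remains is offered to AI/GPUaaS tenants.

def schedule_gpu(total_gpu: float, ran_load: float, ai_requests: list[float]):
    """Admit AI jobs into whatever capacity the RAN workload leaves free."""
    headroom = max(0.0, total_gpu - ran_load)
    admitted = []
    for req in sorted(ai_requests):   # smallest jobs first
        if req <= headroom:
            admitted.append(req)
            headroom -= req
    return admitted, headroom

# Off-peak: RAN uses 30% of the GPU, so most AI jobs fit.
print(schedule_gpu(1.0, 0.3, [0.2, 0.3, 0.4]))
# Busy hour: RAN uses 90%, so nearly everything is squeezed out.
print(schedule_gpu(1.0, 0.9, [0.2, 0.3, 0.4]))
```

The interesting orchestration question is precisely what happens to the evicted AI jobs at the busy hour, and who arbitrates between the SMO and the AIaaS platform when they disagree.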

In an Open RAN environment, near real time and non real time RICs deploy xApps and rApps. The orchestration of the apps, services and resources is managed by the SMO. While not every app can be categorized as "AI", it is likely that the SMO will take responsibility for AI for RAN and AI on RAN orchestration. If AI and RAN requires its own orchestration beyond Kubernetes, it is unlikely to live in isolation from the SMO.

From my perspective, the multiple orchestration, policy management and enforcement points will not allow a multivendor environment for the control plane. With architecture and interfaces still in flux, specialty vendors will have trouble imposing their perspective without control of the end-to-end architecture. As a result, it is likely that the same vendor will provide the SMO, non real time RIC and AI-RAN orchestration functions (you know my feelings about the near real time RIC).

If you make the Venn diagram of vendors providing / investing in all three, you will have a good idea of the direction the implementation will take.

Wednesday, April 16, 2025

Is AI-RAN the future of telco?

AI-RAN has emerged recently as an interesting evolution of telecom networks. The Radio Access Network (RAN) has been undergoing a transformation over the last 10 years, from a vertical, proprietary, highly concentrated market segment to a disaggregated, virtualized, cloud native ecosystem.

A product of the maturation of several technologies (telco cloudification, RAN virtualization, Open RAN and, lately, AI/ML), AI-RAN has been positioned as a means to further disaggregate and open up the RAN infrastructure.

This latest development has to be examined from an economic standpoint. RAN accounts for roughly 80% of a telco's deployment costs (excluding licenses, real estate...), and roughly 80% of those costs are attributable to the radios themselves and their electronics. The market is dominated by a few vendors, and telecom operators are exposed to substantial supply chain risks and reduced purchasing power.

The AI-RAN Alliance was created in 2024 to accelerate its adoption. It is led by network operators (T-Mobile, Softbank, Boost Mobile, KT, LG Uplus, SK Telecom...) and telecom and IT vendors (Nvidia, arm, Nokia, Ericsson, Samsung, Microsoft, Amdocs, Mavenir, Pure Storage, Fujitsu, Dell, HPE, Kyocera, NEC, Qualcomm, Red Hat, Supermicro, Toyota...).

If you are familiar with this blog, you already know of the evolution from RAN to cloud RAN and Open RAN, and more recently the forays into RAN intelligence with the early implementations of the near and non real time RAN Intelligent Controllers (RICs).

AI-RAN goes one step further in proposing that the specialized electronics and software traditionally embedded in RAN radios be deployed on high compute, GPU-based, commercial off-the-shelf servers, and that these GPUs manage the complex RAN computations (beamforming management, spectrum and power optimization, waveform management...) while doubling as a general high compute environment for AI/ML applications that would benefit from deployment in the RAN (video surveillance; scene, object and biometrics recognition; augmented / virtual reality; real time digital twins...). It is very similar to the early edge computing market.

The potential success of AI-RAN relies on a number of techno / economic assumptions:

For Operators:

  • It is desirable to be able to deploy RAN management, analytics, optimization, prediction, automation algorithms in a multivendor environment that will provide deterministic, programmable results.
  • Network operators will be able and willing to actively configure, manage and tune RAN parameters.
  • Deployment of AI-RAN infrastructure will be profitable (a combination of compute costs being offset by optimization savings and new service opportunities).
  • AI-RAN power consumption, density, capacity and performance will in time exceed traditional architectures.
  • Network Operators will be able to accurately predict demand and deploy infrastructure in time, and in the right locations, to capture it.
  • Network Operators will be able to budget the CAPEX / OPEX associated with this investment before revenue materialization.
  • An ecosystem of vendors will develop, reducing supply chain risks.

For vendors:

  • RAN vendors will open their infrastructure and permit third parties to deploy AI applications.
  • RAN vendors will let operators and third parties program the RAN infrastructure.
  • There is sufficient market traction to productize AI-RAN.
  • The rate of development of AI and GPU technologies will outpace traditional architecture.
  • The cost of roadmap disruption and increased competition will be outweighed by new revenues, or will be the cost of survival.
  • AI-RAN represents an opportunity for new vendors to emerge and focus on very specific aspects of the market demand without having to develop full stack solutions.

For customers:

  • There will be a market and demand for AI as a Service, whereby enterprises and verticals will want to use a telco infrastructure that provides unique computing and connectivity benefits over on-premises or public cloud solutions.
  • There are AI/ML services that (will) necessitate high performance computing environments, with guaranteed, programmable connectivity, and with a cost profile that is better mutualized through a multi-tenant environment.
  • Telecom operators are best positioned to understand and satisfy the needs of this market.
  • Security, privacy, residency, performance and reliability will be at least equivalent to on-premises or cloud solutions, with a cost / performance benefit.
As the market develops, new assumptions are added every day. The AI-RAN Alliance has defined three general working groups to create the framework to validate them:
  1. AI for RAN: AI to improve RAN performance. This group focuses on how to program and optimize the RAN with AI. The expectation is that this work will drastically reduce the cost of RAN, while allowing sophisticated spectrum, waveform and traffic manipulations for specific use cases.
  2. AI and RAN: architecture to run AI and RAN on the same infrastructure. This group must find the multi-tenant architecture allowing the system to develop into a platform able to host a variety of AI workloads concurrently with the RAN.
  3. AI on RAN: AI applications running on RAN infrastructure. This is the most ambitious and speculative group, defining the requirements on the RAN to support the AI workloads that will be defined.
As with Telco Edge Computing and RAN intelligence, while the technological challenges appear formidable, the commercial and strategic implications are likely to dictate whether AI-RAN succeeds. Telecom operators are pushing for its implementation to increase control over RAN spending and user experience, while possibly developing new revenue with the diffusion of AIaaS. Traditional RAN vendors see the nascent technology as a further threat to their capacity to sell programmable networks as black boxes, configured, sold and operated by them. New vendors see an opportunity to step into the RAN market and carve out market share at the expense of legacy vendors.

Monday, March 10, 2025

MWC 25 thoughts

 Back from Mobile World Congress 2025!

I am so thankful I get to meet my friends, clients and ex-colleagues year after year and to witness firsthand how our industry is moving.

2025 was probably my 23rd congress or so and I always find it invaluable for many reasons. 



Innovation from the East

What stood out for me this year was how much innovation is coming from Asian companies, while most Western companies seem to be focusing on cost control.

The feeling was pervasive throughout the show and the GLOMO awards winners showed Huawei, ZTE, China Mobile, SK, Singtel… investing in discovering and solving problems that many in Western markets dismiss as futuristic or outside their comfort zone. In mature markets, where price attrition is the rule, differentiation is key.

On a related topic, being Canadian, I can't help thinking that many companies and regulators who looked at banning some Chinese vendors from their markets over security concerns now find themselves having to evaluate whether American suppliers might not also represent a risk in the future.

Without delving into politics, I saw and heard many initiatives to enhance security, privacy, sovereignty, either in the cloud or the supply chain categories. 

Open telco APIs

Open APIs and the progress of telco network APIs are encouraging, but while it is a good idea, it feels late and lacking in comparison with the webscalers' tooling and offerings to discover, consume and manage network functions on demand. Much work remains to be done, in my opinion, to enhance the aaS portion of the offering, particularly if slicing APIs are to be offered.

Open RAN & RIC

The Open RAN threat has successfully accelerated cloud and virtualized RAN adoption. Samsung started the trend, and Ericsson's deployment at AT&T has crystallized mMIMO + CU + DU + non RT RIC from a main vendor and small cells + rApps from others as a viable option. Vodafone's RAN refresh may see more players enter the mix, as Mavenir and Nokia are struggling to gain meaningful market share.

The Juniper / HPE acquisition drama, together with the Broadcom / VMware commercial strategy, seems to have killed the idea of an independent non RT RIC vendor. The near RT RIC remains in my mind a flawed proposition as a host of 3rd party xApps, and an expensive gadget for anything other than narrow use cases.

AI

AI, of course, was the belle of the ball at MWC. Everyone had a twist, a demo, a model, an agent, but few were able to demonstrate utility beyond automated time series regression as predictions or LLM based natural language processing ad nauseam…

Some were convincingly starting to show Small Models tailored to their technology, topology and network, with promising results. It is still early, but it feels that this is where the opportunity lies. The creation and curation of a dataset that can be used to plan, manage, maintain and predict the state of one's network, with bespoke algorithms, seems more desirable than vague, large, poorly trained wholesale models.

Telco cloud and edge computing are having a bit of a moment, with AI and GPUaaS strategies being enacted.

All in all, many are trying to develop an AI strategy, and while we are still far from the AI-Native Telco Network, there is some progress and some interesting ventures amidst the noise.

Friday, July 5, 2024

Readout: Ericsson's Mobility Report June 2024

 


It has been a few years now since Ericsson has taken to providing a yearly report on their view of the evolution of connectivity. Like Cisco's annual internet report, it provides interesting data points on telecom technology and services' maturity, but focuses on cellular technology, lately embracing fixed wireless access and non-terrestrial networks as well.

In this year's edition, a few elements caught my attention:

  • Devices supporting network slicing are few and far between. Only iOS 17 and Android 13 support some capabilities for indicating slicing parameters to their underlying applications, and these are the higher end, latest smartphones, so it is no wonder that 5G Stand Alone is late in delivering on its promises if end to end slicing is only possible for a small fraction of customers. It is still possible to deploy slicing without device support, but there are limitations: most notably, slicing per content / service is not achievable, while slicing per device or subscriber profile is possible.

  • RedCap (5G Reduced Capability) for IoT, wearables, sensors, etc... is making its appearance on networks, mostly as demos and trials at this stage. The first devices are unlikely to reach mass market availability until the end of next year.

  • Unsurprisingly, mobile data traffic is still growing, albeit at a lower rate than previously reported, with a 25% yearly growth rate, or just over 6% quarterly. The growth is mostly due to smartphone and 5G penetration and video consumption, which accounts for about 73% of the traffic. This traffic data includes Fixed Wireless Access, although it is not broken down. The rollout of 5G, particularly in mid-band, together with carrier aggregation, has allowed mobile network operators to compete efficiently with fixed broadband operators through FWA. FWA's growth, in my mind, is the first successful application of 5G as a differentiated connectivity product. As devices and modems supporting slicing appear, more sophisticated connectivity and pricing models can be implemented. FWA price packages differ markedly from mobile data plans: the former are mostly speed based, emulating cable and fibre offerings, whereas the latter are usually all-you-can-eat best effort connectivity.

  • Where the traffic growth projections become murky is with the impact of XR services. Mixed, augmented and virtual reality services haven't really taken off yet, but their possible impact on traffic mix and network load could be immense. XR requires a number of technologies to reach maturity at the same time (bendable / transparent screens; low power, portable, heat efficient batteries; low latency / high compute on device / at the edge; high downlink / uplink capabilities; deterministic mesh latency over an area...) to reach mass market, and we are still some ways away from it in my opinion.

  • Differentiated connectivity for cellular services is a long standing subject of interest of mine. My opinion remains the same: "The promise and business case of 5G were supposed to revolve around new connectivity services. Until now, essentially, whether you have a smartphone, a tablet, a laptop, a connected car or an industrial robot, and whether you are a work from home or road warrior professional, all connectivity products are really the same. The only variables are price and coverage.

    5G was supposed to offer connectivity products that could be adapted to different device types, verticals and industries, geographies, vehicles, drones... The 5G business case hinges on enterprise, vertical and government adoption and willingness to pay for enhanced connectivity services. By and large, this hasn't happened yet. There are several reasons for this, the main one being that a network overhaul is necessary to enable these services.

    First, a service-based architecture is necessary, comprising 5G Stand Alone, telco cloud, Multi-Access Edge Computing (MEC) and Service Management and Orchestration. Then, cloud-native RAN, either cloud RAN or Open RAN (and particularly the RAN Intelligent Controllers - RICs), would be useful. All this "plumbing" enables end to end slicing, which in turn creates the capability to serve distinct and configurable connectivity products.

    But that's not all... A second issue is that although it is accepted wisdom that slicing will create connectivity products that enterprises and governments will be ready to pay for, there is little evidence of it today. One of the key differentiators of the "real" 5G and slicing will be deterministic speed and latency. While most market actors are ready to recognize that, in principle, controllable latency would be valuable, no one really knows the incremental value of going from variable best effort to deterministic 100, 10 or 5 millisecond latency.

    The last hurdle is the realization by network operators that Mercedes, Walmart, 3M, Airbus... have a better understanding of their connectivity needs than any carrier, and that they have skilled people able to design networks and connectivity services across WAN, cloud, private and cellular networks. All they need is access and a platform with APIs. A means to discover, reserve and design connectivity services on the operator's network will be necessary, and the successful operators will understand that their network skillset might be useful for consumers and small / medium enterprises, but less so for large verticals, governments and companies." Ericsson is keen to promote and sell the "plumbing" that enables this vision to MNOs, but will this be sufficient to fulfill the promise?

  • Network APIs are a possible first step to open up connectivity to third parties willing to program it. They are notably absent from the report, maybe due to the fact that the company announced a second impairment charge of $1.1B (after a $2.9B initial write off) in less than a year on the $6.2B acquisition of Vonage.

  • Private networks are another highlighted trend in the report, with a convincing example of an implementation in the Northstar innovation program, in collaboration with Telia and AstaZero. The implementation focuses on automotive applications, from autonomous vehicles to V2X connectivity and remote control... On paper, it delivers everything operators dream about when thinking of differentiated connectivity for verticals and industries. One has to wonder how much it costs and whether it is sustainable if most of the technology is provided by a single vendor.

  • Open RAN and programmable networks are showcased in AT&T's deal, which I have previously reported on and commented. There is no doubt that single vendor automation, programmability and Open RAN can be implemented at scale. The terms of the deal seem to indicate a great cost benefit for AT&T. We will have to measure the benefits as the changes are rolled out in the coming years.


Wednesday, July 3, 2024

June 2024 Open RAN requirements from Vodafone, Telefonica, Deutsche Telekom, Tim and Orange


As is now customary, the "big 5" European operators behind Open RAN have released their updated requirements to the market, indicating to vendors where they should direct their roadmaps to have the best chance of being selected in these networks.

As per previous iterations, I find it useful to compare and contrast the unanimous and highest priority requirements as indications of market maturity and directions. Here is my read on this year's release:

Scenarios:

As per last year, the big 5 unanimously require support for O-RU and vDU/CU with an open front haul interface on site for macro deployments. This indicates that although the desire is to move to a disaggregated implementation, with vDU / CU potentially moving to the edge or the cloud, the operators are not fully ready for these scenarios and prioritize first a like-for-like replacement of a traditional gNodeB with a disaggregated, virtualized version, all at the cell site.

Moving to the high priority scenarios requested by a majority of operators, vDU/vCU in a remote site with O-RU on site makes its appearance, together with RAN sharing. Both MORAN and MOCN scenarios are desirable, the former with shared O-RU and dedicated vDU/vCU, and the latter with shared O-RU, vDU and optionally vCU. In all cases, a RAN sharing management interface is to be implemented to allow host and guest operators to manage their RAN resources independently.

Additional high priority requirements are the support for indoor and outdoor small cells: indoors, sharing O-RU and vDU/vCU in multi operator environments; outdoors, single operator with O-RU and vDU either co-located on site or fully integrated with Higher Layer Split. The last high priority requirement is for 2G/3G support, without indication of architecture.

Security:

The security requirements are essentially the same as last year, freely adopting 3GPP requirements for Open RAN. The polemic around Open RAN's level of security compared to other cloud virtualized applications or traditional RAN architectures has been put to bed. Most realize that open interfaces inherently open more attack surfaces, but this is not specific to Open RAN; every cloud based architecture has the same drawback. Security by design goes a long way towards alleviating these concerns, and a proper zero trust architecture can in many cases provide a higher security posture than legacy implementations. In this case, extensive use of IPSec, TLS 1.3 and certificates at the interface and port levels for the open front haul and management planes provides the necessary level of security, together with the mTLS interface between the RICs. The O-Cloud layer must support Linux security features, secure storage, and encrypted secrets with an external storage and management system.

CaaS:

As per last year, the cloud native infrastructure requirements have been refined, including hardware accelerator (GPU, eASIC) Kubernetes support, block and object storage for dedicated and hyperconverged deployments, etc... Kubernetes infrastructure discovery, deployment, lifecycle management and cluster configuration have been further detailed. Power saving requirements have been added at the fan and CPU levels, with SMO driven policy and configuration and idle mode power down capabilities.

CU / DU:

CU / DU interface requirements remain the same: basically, support for all Open RAN interfaces (F1, HLS, X2, Xn, E1, E2, O1...). Support for both look-aside and in-line accelerator architectures is also the highest priority, indicating that operators haven't really reached a conclusion on a preferred architecture and are mandating both for flexibility's sake (in other words, in-line acceleration hasn't convinced them that it can efficiently, in cost and power, replace look-aside). Fronthaul ports must support up to 200Gb through 12 x 10/25Gb combinations, and mid haul up to 2 x 100Gb. Energy efficiency and consumption are to be reported for all hardware (servers, CPUs, fans, NIC cards...). Power consumption targets for D-RAN of 400 watts at 100% load for 4T4R and 500 watts for 64T64R are indicated. These targets seem optimistic and poorly indicative of current vendors' capabilities in that space.

O-RU:

The radio situation is still messy, and my statements from last year still mostly stand: "While all operators claim highest urgent priority for a variety of Radio Units with different form factors (2T2R, 2T4R, 4T4R, 8T8R, 32T32R, 64T64R) in a variety of bands (B1, B3, B7, B8, B20, B28B, B32B/B75B, B40, B78...) and with multi band requirements (B28B+B20+B8, B3+B1, B3+B1+B7), there is no unanimity on ANY of these. This leaves vendors in a quandary, trying to find which configurations could aggregate enough volume to make the investments profitable. There are hidden dependencies that are not spelled out in the requirements, and this is where we see the limits of the TIP exercise. Operators cannot really, at this stage, select 2 or 3 new RU vendors for an Open RAN deployment, which means that, in principle, they need vendors to support most, if not all, of the bands and configurations they need to deploy in their respective networks. Since each network is different, it is extremely difficult for a vendor to define the minimum product line up necessary to satisfy most of the demand. As a result, the volume projections are low, which makes vendors focus only on the most popular configurations. While everyone needs 4T4R or 32T32R in the n78 band, having 5 vendors providing options for these configurations, with none delivering B40 or B32/B75, makes it impossible for operators to select a single vendor and for vendors to aggregate sufficient volume to create a profitable business case for Open RAN." This year, there is one high priority configuration with unanimous support: 4T4R B3+B1. The other highest priority configurations, requested by a majority of operators, are 2T4R B28B+B20+B8, 4T4R B7, B3+B1, B32B+B75B, and 32T32R B78, with various power targets from 200 to 240W.

Open Front Haul:

The front haul interface requirements only acknowledge the introduction of uplink enhancements for massive MIMO scenarios as they are introduced to the 7.2.x specification, with a lower priority. This indicates that while Ericsson's proposed interface and its architectural impact are being vetted, it is likely to become an optional implementation, left to the vendor's choice until / unless credible cost / performance gains can be demonstrated.

Transport:

Optical budgets and scenarios are now introduced.

RAN features:

Final MoU positions are now proposed. Unanimous items introduced in this version revolve mostly around power consumption and efficiency counters, KPIs and mechanisms. Other new requirements follow 3GPP releases 16 and 17 on carrier aggregation, slicing and MIMO enhancements.

Hardware acceleration:

A new section is introduced to clarify the requirements associated with L1 and L2 use of look-aside and in-line acceleration. The most salient requirement is for simultaneous multi-RAT 4G/5G support.

Near RT RIC:

The Near Real Time RIC requirements continue to evolve and be refined. My perspective on the topic hasn't changed, and a detailed analysis can be found here. In short, letting third parties prescribe policies that manipulate the DU's scheduler is anathema for most vendors in the space and, beyond the technical difficulties, would go against their commercial interests. Operators will have to push very hard, with much commercial incentive, to see xApps from third party vendors commercially deployed.

E2E use cases:

End-to-end use cases are being introduced to clarify the operators' priorities for deployments. There are many, but they offer a good understanding of these priorities: traffic steering for dynamic load balancing, QoE and QoS based optimization to allocate resources based on a desired quality outcome, RAN sharing, slice assurance, V2X, UAV, energy efficiency... This section is a laundry list of desiderata, almost all high priority, showing perhaps that operators are getting a little unfocused on which real use cases they should pursue as an industry. Too many priorities are likely to result in no priority at all.

SMO

With over 260 requirements, SMO and non RT RIC is probably the most mature section, and it shows a true commercial priority for the big 5 operators.

All in all, the document provides a good idea of the level of maturity of Open RAN for the operators that have been supporting it the longest. The types of requirements and their prioritization provide a useful framework for vendors who know how to read them.

More in depth analysis of Open RAN and the main vendors in this space is available here.


Thursday, May 2, 2024

How to manage mobile video with Open RAN

Ever since the launch of 4G, video has been a thorny issue for network operators to manage. Most of them had rolled out unlimited or generous data plans without understanding how video would affect their networks and economics. Most videos streamed to your phone use a technology called Adaptive Bit Rate (ABR), which is supposed to adapt the video's definition (think SD, HD, 4K…) to the network conditions and your phone's capabilities. While this implementation was supposed to provide more control over the way videos were streamed on the networks, in many cases it had the reverse effect.

 

The multiplication of streaming video services has led to ferocious competition on the commercial and technological front. While streaming services visibly compete on their pricing and content attractiveness, a more insidious technological battle has also taken place. The best way to describe it is to compare video to a gas. Video will take up as much capacity in the network as is available.

When you start a streaming app on your phone, it will assess the available bandwidth and try to deliver the highest definition video available. Smartphone vendors and streaming providers try to provide the best experience to their users, which in most cases means getting the highest bitrate available. When several users in the same cell try to stream video, they are all competing for the available bandwidth, which leads in many cases to a suboptimal experience, as some users monopolize most of the capacity while others are left with crumbs.
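The competition described above can be illustrated with a toy model of greedy ABR rate selection; the bitrate ladder and bandwidth estimates below are illustrative, and real players (HLS/DASH) add buffering and smoothing that this sketch ignores.

```python
# Toy model of ABR competition in a shared cell: each client greedily
# picks the highest bitrate rung at or below its own bandwidth
# estimate, with no knowledge of the other streams.

LADDER = [1, 2.5, 5, 8, 16]   # Mbps rungs: roughly SD ... 4K

def pick_rung(estimated_mbps: float) -> float:
    """Greedy ABR: highest rung that fits the estimate, else the lowest."""
    fitting = [r for r in LADDER if r <= estimated_mbps]
    return max(fitting) if fitting else LADDER[0]

# Three viewers in a 20 Mbps cell; the first probed while the cell was
# quiet and estimates far more bandwidth than its fair share.
estimates = [18, 6, 4]
choices = [pick_rung(e) for e in estimates]
print(choices, "total:", sum(choices), "Mbps")  # demand exceeds the cell
```

Because each player optimizes for itself, aggregate demand can exceed what the cell can deliver, and the viewer with the stale high estimate crowds out the others: exactly the gas-like behavior described above.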

 

In recent years, technologies have emerged to mitigate this issue. Network slicing, for instance, when fully implemented could see dedicated slices for video streaming, which would theoretically guarantee that video streaming does not adversely impact other traffic (video conferencing, web browsing, etc…). However, it will not resolve the competition between streaming services in the same cell.


Open RAN offers another tool for efficiently resolving these issues. The RIC (RAN Intelligent Controller) provides, for the first time, the capability to visualize a cell’s congestion in near real time and to apply optimization techniques with a great level of granularity. Until Open RAN, the means of visualizing network congestion were limited in a multi-vendor environment, and the means to alleviate it were broad and coarse. The RIC allows operators to create policies at the cell level, on a per-connection basis. Algorithms can infer the traffic type, and policies can be enacted to adapt the allocated bandwidth based on a variety of parameters such as signal strength, traffic type, congestion level and power consumption targets.
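To make this concrete, here is a hypothetical sketch of the kind of per-connection rule an rApp/xApp could enact through the RIC. The thresholds, traffic labels and caps are invented for illustration, and the O-RAN A1/E2 signaling that would actually carry the policy is not shown.

```python
# Hypothetical per-connection shaping policy; all numbers are illustrative.

def bitrate_cap_kbps(traffic_type, cell_load):
    """Return a downlink cap in kbps for a connection, or None for no cap.
    Shaping only kicks in when the cell is congested (load > 0.8)."""
    if cell_load <= 0.8:
        return None           # uncongested: leave all connections unshaped
    if traffic_type == "emergency":
        return None           # never throttle priority traffic
    if traffic_type == "video_streaming":
        return 3000           # enough for HD ABR, stops 4K monopolizing the cell
    return 10000              # generous default cap for other traffic

print(bitrate_cap_kbps("video_streaming", 0.95))  # -> 3000
print(bitrate_cap_kbps("video_streaming", 0.50))  # -> None
```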


For instance, an operator or a private network for stadiums or entertainment venues could easily program their network to not allow upstream videos during a show, to protect broadcasting or intellectual property rights. This can be easily achieved by limiting the video uplink traffic while preserving voice, security and emergency traffic.


Another example would see a network actively dedicating deterministic capacity per connection during rush hour, or based on thresholds in a downtown core, to guarantee that all users have access to video services with equally shared bandwidth and quality.


A last example could see first responder and emergency services get guaranteed high-quality access to video calls and broadcasts.


When properly integrated into a policy and service management framework for traffic slicing, Open RAN can be an efficient tool for adding fine-grained traffic optimization rules, allowing a fairer apportioning of resources among all users while preserving overall quality of experience.


Wednesday, March 27, 2024

State of Open RAN 2024: Executive Summary


The 2023 Open RAN market ended with a bang, with AT&T awarding Ericsson and Fujitsu a $14 billion deal to convert 70% of its traffic to run on Open RAN by the end of 2026. 2024 started equally loud with the $13 billion acquisition of Juniper Networks by HPE, on the thesis of Juniper’s progress in telecom AI, and specifically in RAN intelligence with the launch of its RIC program.

2023 also saw the long-awaited launch of 1&1 (formerly Drillisch) in Germany, the first Open RAN greenfield in Europe, as well as the announcement from Vodafone that it will release a RAN RFQ dedicating 30% of its 125,000 global sites to Open RAN.

Commercial deployments are now under way in Western Europe, spurred by Huawei replacement mandates.

On the vendors’ front, Rakuten Symphony seems to have markedly failed to capitalize on the Altiostar acquisition and to convince brownfield network operators to purchase telecom gear from a fellow network operator. While Ericsson has announced its support for Open RAN with conditions, Samsung has been the vendor making the most progress, with convincing market share growth across the geographies it covers. Mavenir has been growing steadily. A new generation of vendors has taken advantage of the Non-Real-Time RIC / SMO opportunity to enter the space. Non-traditional RAN vendors such as VMware and Juniper Networks, and SON vendors like Airhop, have grown the most in that space, together with pure new entrants such as app player Rimedo Labs. With the acquisitions of VMware and Juniper Networks, both leaders in the RIC segment, 2024 could be make-or-break for this category, as both companies reevaluate their priorities and align their commercial interests with their acquirers.

On the technology side, the O-RAN Alliance has continued its progress, publishing new releases while establishing bridgeheads with 3GPP and ETSI to facilitate the inclusion of Open RAN in the mainstream 5G-Advanced and 6G standards. The accelerator debate between inline and look-aside architectures has died down, with the first layer 1 abstraction layers allowing vendors to deploy effectively on different silicon with minimal adjustment. Generative AI and large language models have captured the industry’s imagination, and Nvidia has been capitalizing on the seemingly infinite appetite for specialized computing in cloud and telecom networks.

This report provides an exhaustive review of the key technology trends and of vendors’ product offerings and strategies, ranging from silicon, servers, cloud CaaS, open RUs, DUs, CUs, RICs and apps to SMOs, in the Open RAN space in 2024.

Tuesday, March 19, 2024

Why are the US government and DoD in particular interested in Open RAN?

Over the last 24 months, it has been very interesting to see the US Government move from keen interest in Open RAN to making it policy for its procurement of connectivity technology.

As I prepare to present at next week's RIC Forum, organized by NTIA and the US Department of Defense, many of my clients have been asking why the US Government seems so invested in Open RAN.

Supply chain diversification:

The first reason for this interest is the observation that the pool of network equipment providers has been growing increasingly shallow. The race from 3G to 4G to 5G has required vendors to attain a high level of industrialization and economies of scale, achieved through many rounds of consolidation. A limited supply chain with few vendors per category represents a strategic risk for any actor relying on it to operate economically. Open RAN allows the emergence of new vendors in specific categories that do not require the industrial capacity to deliver end-to-end RAN networks.

Cost effectiveness:

The lack of vendor choice has shifted negotiating power from network operators to vendors, which has negatively impacted margins and capacity to make changes. The emergence of new Open RAN vendors puts pressure on incumbents and traditional vendors to reduce their margins.

Geostrategic interest:

The growth of Huawei, ZTE and other Chinese vendors, with their suspected links to the Chinese government and army, together with China's somewhat opaque privacy and security laws, has prompted the US government and many allies to ban or severely restrict the categories of telecom products that can be deployed in many telecom networks.

Furthermore, while US companies dominate the traffic management, routing, data center and hyperscaler spaces, the RAN, core network and general telco infrastructure remain dominated by European and Asian vendors. Open RAN has been an instrument to facilitate and accelerate the replacement of Chinese vendors, but also to stimulate the emergence and growth of US vendors.

DoD use case example: Spectrum Dominance

This area is less well understood and recognized, but it is an integral part of the US Government's interest in Open RAN, and of the DoD's in particular. Private networks require connectivity products adapted to specific use cases, devices and geographies. Commercial macro networks offer a "one size fits all" solution that is difficult and costly to adapt for that purpose. Essentially, the DoD runs hundreds of private networks, whether on its bases, its carriers or in ad hoc tactical environments. Being able to set up a secure, programmable, cost-effective network, either permanently or ad hoc, is an essential requirement, and can also become a differentiator or a force multiplier. A tactical unit deploying an ad hoc network might look not only at means to create a secure subnet, but also at establishing spectrum dominance by manipulating waveforms and effectively interfering with adversary networks. This is one example where programmability at the RAN level can turn into an asset for battlefield dominance. There are many more use cases, but their classification might not allow us to comment on them publicly. They illustrate, though, how technological dominance can extend to every aspect of telecom.

Open RAN, in that respect, provides the programmability, cost effectiveness and modularity to create fit-for-purpose connectivity experiences in a multi-vendor environment.


Monday, January 15, 2024

Gen AI and LLM: Edging the Latency Bet

The growth of generative AI and Large Language Models has restarted a fundamental question about the value of a millisecond of latency. When I was at Telefonica, and later, consulting at Bell Canada, one of the projects I was looking after was the development, business case, deployment, use cases and operation of Edge Computing infrastructure in a telecom network.

Since I have been developing and deploying Edge Computing platforms since 2016, I have had a head start in figuring out the fundamental questions surrounding the technology, the business case and the commercial strategy.

Where is the edge?


The first question one has to tackle is where the edge is. It is an interesting question because it depends on your perspective: the edge is a different location if you are a hyperscaler, a telco network operator or a developer. It can also vary over time and geography. In any case, the edge is a place where one can position compute closer than the current public or private cloud infrastructure in order to derive additional benefits. It can range from a regional, to a metro, to a mini data center, all the way to on-premise or on-device cloud compute capability. Each has its distinct costs, limitations and benefits.

What are the benefits of Edge Computing?


The second question, or maybe the first one, from a pragmatic and commercial standpoint is why do we need edge computing? What are the benefits?

While these will vary depending on the consumer of the compute capability and on where the compute function is located, we can derive general benefits that are indexed to the location. Among these, we can list data sovereignty, increased privacy and security, reduced latency, the enabling of cheaper (dumber) devices, and the creation of new media types, models and services.

What are the use cases of Edge Computing?

I have deployed and researched over 50 use cases of edge computing, from banal storage, caching and streaming at the edge, to sophisticated TV production, to specialized Open RAN and telco User Plane Function deployments, to machine vision for industrial and agricultural applications.

What is the value of 1ms?

Sooner or later, after testing various use cases, locations and architectures, the fundamental question emerges: what is the value of 1ms? It is a question heavy with assumptions and correlations. In absolute terms, we would all like connectivity that is faster, more resilient, more power efficient, more economical and with lower latency. The factors that condition latency are the number of hops or devices the connection has to traverse between the device and the origin point where the content or code is stored, transformed and computed, and the distance between the device and that compute point. To radically reduce latency, you have to reduce the number of hops or reduce the distance; Edge Computing achieves both.

But obviously, there is a cost. Latency is proportional to distance, so the fundamental question becomes: what is the optimal placement of a compute resource, and for which use case? Computing is a continuum. Some applications and workloads are not latency, privacy or sovereignty sensitive and can run in an indiscriminate public cloud, while others necessitate the compute to be in the same country, region or city, and others still require even closer proximity. The difference in investment between a handful of centralized data centers and several hundreds or thousands of micro data centers is staggering.
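The distance side of that trade-off is easy to put numbers on. Assuming signals travel through fiber at roughly 200,000 km/s (about 5 µs per km, one way) and that each hop adds on the order of 0.1 ms of processing and queuing, both rough assumptions, a back-of-the-envelope round-trip estimate looks like this:

```python
# Back-of-the-envelope round-trip latency from distance and hop count.
# 200 km per ms one way in fiber and 0.1 ms per hop are rough assumptions.

def rtt_ms(distance_km, hops, per_hop_ms=0.1):
    """Estimate round-trip time in milliseconds."""
    one_way_ms = distance_km / 200.0 + hops * per_hop_ms
    return 2 * one_way_ms

print(round(rtt_ms(2000, 10), 1))  # centralized cloud, ~22 ms round trip
print(round(rtt_ms(50, 3), 1))     # metro edge, ~1.1 ms round trip
print(round(rtt_ms(1, 1), 2))      # on-premise edge, ~0.21 ms round trip
```

The orders of magnitude are what matter: getting from tens of milliseconds to single digits requires moving the compute hundreds of kilometers closer, which is exactly the investment question posed above.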

What about AI and LLM?

Until now, these questions were somewhat theoretical and were answered organically by hyperscalers and operators based on their respective views of the market's evolution. Generative AI and its extraordinary appetite for compute is rapidly changing this market space. Not only does Gen AI account for a sizable and growing portion of all cloud compute capacity, but the question of latency is now coming to the fore. Gen AI relies on Large Language Models that require large amounts of storage and compute to be trained to recognize patterns. The larger the LLM, the more compute capacity, the better the pattern recognition. Pattern recognition leads to the generation of similar results based on incomplete prompts / questions / data sets: that is Gen AI. Where does latency come in? Part of the compute needed to generate a response to a question is in the inference business. While the data set resides in a large compute data center in a centralized cloud, inference sits closer to the user, at the edge, where it parses the request and feeds the trained model with unlabeled input to receive a prediction of the answer. The faster the inference, the more responses the model can provide, which means that low latency is a competitive advantage for a Gen AI service.

As we have seen, there is a relatively small number of options to reduce latency, and they all involve large investments. The question then becomes: what is the value of a millisecond? Is 100ms or 10ms sufficient? When it comes to high-frequency trading, 1ms is extremely valuable (billions of dollars). When it comes to online gaming, low latency is not as valuable as controlled and uniform latency across the players. When it comes to video streaming, latency is generally not an issue, but when it comes to machine vision for sorting fruits on a mechanical conveyor belt running at 10km/h, it is very important.

I have researched and deployed many edge computing use cases and derived a fairly comprehensive workshop on the technological, commercial and strategic aspects of Edge computing and Data Center investment strategies.

If you would like to know more, please get in touch.

Tuesday, January 9, 2024

HPE acquires Juniper Networks


On January 8, the first rumors started to emerge that HPE was entering final discussions to acquire Juniper Networks for $13B. By January 9, HPE announced that it had entered into a definitive agreement for the acquisition.

Juniper Networks, known for its high-performance networking equipment, has been a significant player in the networking and telecommunications sector. It specializes in routers, switches, network management software, network security products, and software-defined networking technology. HPE, on the other hand, has a broad portfolio that includes servers, storage, networking, consulting, and support services.

 The acquisition of Juniper Networks by HPE could be a strategic move to strengthen HPE’s position in the networking and telecommunications sector, diversify its product offerings, and enhance its ability to compete with other major players in the market such as Cisco.

Most analyses I have read so far point to AIOps and Mist AI as the core thesis for the acquisition, enabling HPE to bridge the gap between equipment vendor and solution vendor, particularly in the telco space.

While this is certainly an aspect of the value that Juniper Networks would provide to HPE, I believe that Juniper Networks' latest progress in telco beyond transport, particularly as an early leader in the emerging field of RAN intelligence and SMO (Service Management and Orchestration), was a key catalyst in HPE's interest.

After all, Juniper Networks has been a networking specialist and leader for a long time, from SDN, SD-WAN and optical to data center, wired and wireless networks. While the company has been making great progress there, gradually virtualizing and cloudifying its routers, firewalls and gateway functions, no revolutionary technology had emerged until the application of machine learning and predictive algorithms to the planning, configuration, deployment and management of transport networks.

What is new as well is Juniper Networks' effort to penetrate the telco functions domain beyond transport. The key area ripe for disruption has been the Radio Access Network (RAN), specifically as Open RAN becomes an increasingly relevant industry trend pervading network architectures, culminating in AT&T's selection of Ericsson last month to deploy Open RAN for $14B.

Open RAN offers disaggregation of the RAN, with potential multi-vendor implementations benefitting from open standard interfaces. Juniper Networks, not a traditional RAN vendor, has been quick to capitalize on its AIOps expertise by jumping into the RAN intelligence marketspace, creating one of the most advanced RAN Intelligent Controllers (RIC) in the market and aggressively integrating with as many reputable RAN vendors as possible. This endeavor, opening up the multi-billion-dollar RAN and SMO markets, is pure software and heavily biased towards AI/ML for automation and prediction.

HPE has been heavily investing in the telco space of late, becoming a preferred supplier of physical infrastructure for telco CaaS and Cloud Native Functions (CNF). What HPE has not been able to do is create software or become a credible solutions provider / integrator. The acquisition of Juniper Networks could help solve this. Just like Broadcom's acquisition of VMware (another early RAN intelligence leader), or Cisco's acquisition of Accedian, hardware vendors yearn to move up the value chain by acquiring software and automation vendors, giving them the capacity to provide integrated end-to-end solutions and to achieve synergies and economies of scale through vertical integration.

The playbook is not new, but this potential acquisition could signal a consolidation trend in the telecommunications and networking industry, suggesting a more competitive landscape with fewer but larger players. This could have far-reaching implications for customers, suppliers, and competitors alike.


Monday, December 4, 2023

Is this the Open RAN tipping point: AT&T, Ericsson, Fujitsu, Nokia, Mavenir


The latest publications around Open RAN deliver a mixed bag of progress and skepticism. How should we interpret this conflicting information?

A short retrospective of the most recent news:

On the surface, Open RAN seems to be benefiting from strong momentum and delivering on its promise of disrupting traditional RAN with the introduction of new suppliers, together with the opening of the traditional architecture to a more disaggregated and multi-vendor model. The latest announcement from AT&T and Ericsson even suggests that the promise of reduced TCO for brownfield deployments is achievable:
AT&T's yearly CAPEX guidance is supposed to decline from a high of ~$24B to about $20B per year starting in 2024. If the $14B spent over 5 years on Ericsson RAN yields the announced 70% of traffic on Open RAN infrastructure, AT&T might have dramatically improved its RAN CAPEX with this deal.

What is driving these announcements?

For network operators, Open RAN has been about strategic supply chain diversification. The coalescence of the market into an oligopoly, and then a duopoly after the exclusion of Chinese vendors from a large number of Western networks, has created an unfavorable negotiating position for the carriers. The business case of 5G relies heavily on declining costs, or rather on a change in the cost structure of deploying and operating networks. Open RAN is an element of it, together with edge computing and telco clouds.

For operators

The decision to move to Open RAN is mostly no longer up for debate. While the large majority of brownfield networks will not completely transition to Open RAN, they will introduce the technology alongside the traditional architecture to foster cloud native network implementations. It is not a matter of if but a matter of when.
When varies for each market / operator. Operators do not roll out a new technology just because it makes sense, even if the business case is favorable. A window of opportunity has to present itself to facilitate the introduction of the new technology. In the case of Open RAN, the windows can be:
  • Generational changes: 4G to 5G, NSA to SA, 5G to 6G
  • Network obsolescence: the RAN contracts are up for renewal, the infrastructure is aging or needs a refresh. 
  • New services: private networks, network slicing...
  • Internal strategy: transition to cloud native, personnel training, operating models refresh
  • Vendor weakness: nothing better than an end-of-quarter / end-of-year big infrastructure bundle discount to secure the deal and alleviate the risks of introducing new technologies

For traditional vendors

For traditional vendors, the innovator's dilemma has been at play. Nokia endorsed Open RAN early on, with little to show for it until recently, when it convincingly demonstrated multi-vendor integration and live trials. Ericsson, as market leader, has been slower to endorse Open RAN and has so far adopted it selectively, for understandable reasons.

For emerging vendors

Emerging vendors have had mixed fortunes with Open RAN. The early market leader, Altiostar, was absorbed by Rakuten, which gave the market pause for ~3 years while other vendors caught up. Mavenir, Samsung, Fujitsu and others offer credible products and services, with possible multi-vendor permutations.
Disruptors, emerging and traditional vendors are all battling in the RAN intelligence and orchestration market segment, which promises to deliver additional Open RAN benefits (see link).


Open RAN still has many challenges to overcome before it becomes a solution that can be adopted in any network, but the latest momentum seems to show progress towards implementation of the technology at scale.
More details can be found through my workshops and advisory services.



Thursday, November 23, 2023

Announcing Private Networks 2024


Telecoms cellular networks, delivered by network operators, have traditionally been designed to provide coverage and best effort performance for consumers' general use. This design prioritizes high population density areas, emphasizing cost-effective delivery of coverage solutions with a network architecture treating all connections uniformly, effectively sharing available bandwidth. In some markets, net neutrality provisions further restrict the prioritization of devices, applications, or services over others.

Enterprises, governments, and organizations often turn to private networks due to two primary reasons. First, there may be no commercial network coverage in their operational areas. Second, even when commercial networks are present, they may fail to meet the performance requirements of these entities. Private networks offer a tailored solution, allowing organizations to have dedicated, secure, and high-performance connectivity, overcoming limitations posed by commercial networks.

Enterprises, industries, and government IT departments have developed a deep understanding of their unique connectivity requirements over the years. Recognizing the critical role that connectivity plays in their operations, these entities have sought solutions that align closely with their specific needs. Before the advent of 5G technology, Wi-Fi emerged as a rudimentary form of private network, offering a more localized and controlled connectivity option than traditional cellular networks. However, there were certain limitations and challenges associated with Wi-Fi, and the costs of establishing and operating fully-fledged private networks were often prohibitive.

Enterprises, industries, and government organizations operate in diverse and complex environments, each with its own set of challenges and requirements. These entities understand that a one-size-fits-all approach to connectivity is often inadequate. Different sectors demand varied levels of performance, security, and reliability to support their specific applications and processes. This understanding has driven the search for connectivity solutions that can be tailored to meet the exacting standards of these organizations.

Wi-Fi technology emerged as an early solution that provided a degree of autonomy and control over connectivity. Enterprises and organizations adopted Wi-Fi to create local networks within their premises, enabling wireless connectivity for devices and facilitating communication within a confined area. Wi-Fi allowed for the segmentation of networks, offering a level of privacy and control that was not as pronounced in traditional cellular networks.

However, Wi-Fi also came with its limitations. Coverage areas were confined, and the performance could be affected by interference and congestion, especially in densely populated areas. Moreover, the security protocols of Wi-Fi, while evolving, were not initially designed to meet the stringent requirements of certain industries, such as finance, healthcare, or defense.

Establishing and operating private networks before the advent of 5G technology posed significant financial challenges. The infrastructure required for a dedicated private network, including base stations, networking equipment, and spectrum allocation, incurred substantial upfront costs. Maintenance and operational expenses added to the financial burden, making it cost-prohibitive for many enterprises and organizations to invest in private network infrastructure.

Moreover, the complexity of managing and maintaining a private network, along with the need for specialized expertise, further elevated the costs. These challenges made it difficult for organizations to justify the investment in a private network, especially when commercial networks, despite their limitations, were more readily available and appeared more economically feasible.

The arrival of 5G technology has acted as a game-changer in the landscape of private networks. 5G offers the potential for enhanced performance, ultra-low latency, and significantly increased capacity. These capabilities address many of the limitations that were associated with Wi-Fi and earlier generations of cellular networks. The promise of 5G has prompted enterprises, industries, and government entities to reassess the feasibility of private networks, considering the potential benefits in terms of performance, security, and customization.

The growing trend of private networks can be attributed to several key factors:

  • Performance Customization: Private networks enable enterprises and organizations to customize their network performance according to specific needs. Unlike commercial networks that provide best-effort performance for a diverse consumer base, private networks allow for tailored configurations that meet the unique demands of various industries.
  • Security and Reliability: Security is paramount for many enterprises and government entities. Private networks offer a higher level of security compared to public networks, reducing the risk of cyber threats and unauthorized access. Additionally, the reliability of private networks ensures uninterrupted operations critical for sectors like finance, healthcare, and defense.
  • Critical IoT and Industry 4.0 Requirements: The increasing adoption of Industrial IoT (IIoT) and Industry 4.0 technologies necessitates reliable and low-latency connectivity. Private networks provide the infrastructure required for seamless integration of IoT devices, automation, and real-time data analytics crucial for modern industrial processes.
  • Capacity and Bandwidth Management: In sectors with high data demands, such as smart manufacturing, logistics, and utilities, private networks offer superior capacity and bandwidth management. This ensures that enterprises can handle large volumes of data efficiently, supporting data-intensive applications without compromising on performance.
  • Flexibility in Deployment: Private networks offer flexibility in deployment, allowing organizations to establish networks in remote or challenging environments where commercial networks may not be feasible. This flexibility is particularly valuable for industries such as mining, agriculture, and construction.
  • Compliance and Control: Enterprises often operate in regulated environments, and private networks provide greater control over compliance with industry-specific regulations. Organizations can implement and enforce their own policies regarding data privacy, network access, and usage.
  • Edge Computing Integration: With the rise of edge computing, private networks seamlessly integrate with distributed computing resources, reducing latency and enhancing the performance of applications that require real-time processing. This is particularly advantageous for sectors like healthcare, where quick data analysis is critical for patient care.

As a result of these factors, the adoption of private networks is rapidly becoming a prominent industry trend. Organizations across various sectors recognize the value of tailored, secure, and high-performance connectivity that private networks offer, leading to an increasing shift away from traditional reliance on commercial cellular networks. This trend is expected to continue as technology advances and industries increasingly prioritize efficiency, security, and customized network solutions tailored to their specific operational requirements.

With the transformative potential of 5G, these entities are now reevaluating the role of private networks, anticipating that the advancements in technology will make these networks more accessible, cost-effective, and aligned with their specific operational requirements.

Terms and conditions available on demand: patrick.lopez@coreanalysis.ca  

Monday, November 13, 2023

RAN Intelligence leaders 2023


RAN intelligence is an emerging market segment composed of RAN Intelligent Controllers (RICs) and their associated apps. I have been researching this field for the last two years and, after an exhaustive analysis of vendors' and operators' offerings and strategies, I am glad to publish here an extract of my findings. A complete review of the findings and rankings can be found through the associated report or workshop (commercial products).

The companies that participated in this study are AccelleRAN, AIRA, Airhop, Airspan, Cap Gemini, Cohere Technologies, Ericsson, Fujitsu, I-S Wireless, Juniper, Mavenir, Nokia, Northeastern, NTT Docomo, Parallel Wireless, Radisys, Rakuten Symphony, Rimedo Labs, Samsung, Viavi and VMware.

They were separated in two overall categories:

  • Generalists: companies offering both RIC(s) and Apps implementations
  • Specialists: companies offering only Apps

The Generalist ranking is:



#1 Mavenir
#2 ex aequo Juniper and VMware
#4 Cap Gemini



The Specialists ranking is:



#1 Airhop
#2 Rimedo Labs
#3 Cohere Technologies



The study features a review of a variety of established and emerging vendors in the RAN space. RAN intelligence is composed of:

  • Non Real Time RIC - a platform for RIC intelligence necessitating more than 1 second to process and create feedback loops to the underlying infrastructure. This platform is an evolution of SON (Self Organizing Networks) systems, RAN EMS (Element Management Systems) and OSS (Operations Support Systems). The Non RT RIC is part of the larger SMO (Service Management and Orchestration) framework.
  • rApps -  Applications built on top of the Non RT RIC platform.
  • Near Real Time RIC - a platform for RIC intelligence necessitating less than 1 second to process and create feedback loops to the underlying infrastructure. This platform is a collection of capabilities today embedded within the RUs (Radio Units), DUs (Distributed Units) and CUs (Centralized Units).
  • xApps - Applications built on top of the Near RT RIC platform.
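These timescales can be summarized in a small illustrative mapping. The 1-second boundary comes from the definitions above; the 10 ms lower bound for the Near-RT RIC and the example loops follow common O-RAN practice and are given here as assumptions.

```python
# Illustrative mapping of control-loop latency budgets to Open RAN components.
# The 1 s boundary is from the definitions above; the 10 ms lower bound for
# the Near-RT RIC is an assumption reflecting common O-RAN practice.

def control_loop_owner(budget_ms):
    """Map a control-loop latency budget (ms) to the component that owns it."""
    if budget_ms >= 1000:
        return "Non-RT RIC / rApp (policy, ML training, long-term optimization)"
    if budget_ms >= 10:
        return "Near-RT RIC / xApp (per-UE control, e.g. traffic steering, QoS)"
    return "DU/RU scheduler (real-time, e.g. per-TTI scheduling)"

print(control_loop_owner(60000))  # hourly energy-saving policy
print(control_loop_owner(100))    # traffic steering loop
print(control_loop_owner(1))      # MAC scheduling stays in the DU/RU
```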
The vendors and operators were ranked on their strategy, vision and implementation across six dimensions, based on primary research from interviews, publicly available information, Plugfest participation and deployment observations:
  • Platform - the ability to create a platform and a collection of processes facilitating the developers' capability to create Apps that can be ported from one vendor to the other with minimum adaptation. Considerations were given to Apps lifecycle management, maturity of APIs / SDK, capability to create enabling apps / processes for hosted Apps.
  • Integrations / partnerships - one of the key tenets of Open RAN is the multi-vendor or vendor-agnostic implementation. From this perspective, companies that have demonstrated their integration capabilities in multi-vendor environments or the hosting of third-party applications were ranked higher.
  • Non Real Time RIC - ranking the vision, implementation and maturity of the Non RT RIC capabilities.
  • Near Real Time RIC - ranking the vision, implementation and maturity of the Near RT RIC capabilities.
  • rApps - ranking the vision, implementation and maturity of the rApps offering
  • xApps - ranking the vision, implementation and maturity of the xApps offering

Tuesday, November 7, 2023

What's behind the operators' push for network APIs?

As I saw the latest announcements from the GSMA, Telefonica and Deutsche Telekom, as well as Ericsson's asset impairment on the Vonage acquisition, I was reminded of the call I made three years ago for the creation of operator platforms.

On one hand, 21 large operators (namely, America Movil, AT&T, Axiata, Bharti Airtel, China Mobile, Deutsche Telekom, e& Group, KDDI, KT, Liberty Global, MTN, Orange, Singtel, Swisscom, STC, Telefónica, Telenor, Telstra, Telecom Italia (TIM), Verizon and Vodafone) within the GSMA launched an initiative to open their networks to developers with 8 "universal" APIs (SIM Swap, Quality on Demand, Device Status, Number Verification, Simple Edge Discovery, One Time Password SMS, Carrier Billing – Check Out and Device Location).

Additionally, Deutsche Telekom was the first to pull the trigger, launching its own gateway, "MagentaBusiness API", based on Ericsson's impaired asset. The 3 APIs launched are Quality on Demand, Device Status – Roaming and Device Location, with more to come.

Telefonica, for their part, launched their own Open Gateway offering shortly after DT, with 9 APIs (Carrier Billing, Know Your Customer, Number Verification, SIM Swap, QoD, Device Status, Device Location, QoD WiFi and Blockchain Public Address).

On the other hand, Ericsson wrote off 50% of the Vonage acquisition, while "creating a new market for exposing 5G capabilities through network APIs".

Dissonance much? Why are operators launching network APIs with fanfare while one of the earliest and largest vendors in the field reports an asset depreciation, even as it claims a large market opportunity?

The move by telcos to expose network APIs is not new and has seen a few unsuccessful, aborted tries (GSMA OneAPI in 2013, DT's MobiledgeX launch in 2019). The premises have varied over time, but the central tenet remains the same. Although operators have great experience in rolling out and operating networks, they have essentially been providing the same connectivity services to all consumers, enterprises and governmental organizations without much variation. The growth in cloud networks is underpinned by new generations of digital services, ranging from social media and video streaming for consumers to cloud storage, computing, CPaaS and the migration of IT functions to the cloud for enterprises. Telcos have been mostly observers in this transition, with some timid tries to participate, but by and large they have been quite unsuccessful in creating and rolling out innovative digital services. As edge computing and the Open RAN RIC become possibly the first applications forcing telcos to consider tie-ins with hyperscale cloud providers, several strategic questions arise.

Telcos have been using cloud fabric and porting their vertical, proprietary systems to cloud native environments for their own benefit. As this transition progresses, there is a realization that the growth of private networks reflects enterprises' desire to create and manage their connectivity products themselves. While operators have been architecting and planning their networks for network slicing, hoping to sell managed connectivity services to enterprises, the latter have effectively been managing their connectivity, in the cloud and in private networks, without the telcos' assistance. This realization leads to an important decision: if enterprises want to manage their connectivity themselves and extend that control to 5G / cellular, should telcos let them, and if so, by what means?

The answer is in network APIs. Without giving third parties access to the network itself, the best solution is to offer a set of controlled, limited tools that allow them to discover, reserve and consume network resources while the operator retains overall control of the network. There are a few conditions for this to work.
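The Quality on Demand API is a good example of the "reserve and consume" pattern: a developer requests a temporary QoS boost between a device and an application server, and the operator grants it without exposing the network itself. The sketch below builds (but does not send) such a session request; the field names are modeled loosely on the CAMARA-style QoD APIs the operators launched, and should be treated as illustrative rather than as the authoritative schema.

```python
# Illustrative sketch of a Quality-on-Demand session request, loosely
# modeled on CAMARA-style network APIs. Field names and the profile
# value are assumptions for illustration, not an authoritative schema.
import json


def build_qod_session_request(phone_number: str, app_server_ipv4: str,
                              qos_profile: str, duration_s: int) -> dict:
    """Assemble the body of a QoD session request.

    The operator's gateway, not the developer, decides whether the
    underlying network can honor the requested profile.
    """
    return {
        "device": {"phoneNumber": phone_number},
        "applicationServer": {"ipv4Address": app_server_ipv4},
        "qosProfile": qos_profile,  # e.g. a low-latency profile name
        "duration": duration_s,     # seconds the boosted QoS is reserved
    }


payload = build_qod_session_request("+123456789", "198.51.100.1",
                                    "QOS_LOW_LATENCY", 3600)
print(json.dumps(payload, indent=2))
```

The point of the design is that the developer only ever manipulates this small, declarative surface; discovery, admission control and enforcement stay inside the operator's network.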

The first is essentially the necessity for universal access. Enterprises and developers have gone through the learning curve of using AWS, Google Cloud and Azure tools, APIs and semantics. They can conceivably see value in learning a new set with these telco APIs, but they won't likely go through the effort if each telco has a different set in each country.

The second, and historically the hardest for telcos, is to create and manage an ecosystem and developer community. They have tried many times and in different settings, but in many cases have failed, enlisting only friendly developers, in the form of their suppliers and would-be suppliers, dedicating efforts to further their commercial opportunities. The jury is still out as to whether this latest foray will be successful in attracting independent developers.

The third, and possibly the riskiest part of this equation, is that the premise itself is untested: which APIs would prove useful, and whether enterprises and developers will actually want to use them. Operators are betting that they can essentially create a telco cloud experience for developers more than 15 years after AWS launched, with fewer tools, less capacity to innovate, fewer cloud native skills and a pretty bad record in nurturing developers and enterprises.

Ericsson's impairment of Vonage probably acknowledges that the central premise that telco APIs are desirable is unproven; that if the model succeeds, operators will want to retain control; and that there is less value in the platform than in the APIs themselves (the GSMA's launch on an open source platform essentially depreciates the Vonage acquisition directly).

Another path exists, which provides less control (and commercial upside) for telcos: hosting third-party cloud functions in their networks, even allowing third-party cloud infrastructure (such as AWS Outposts, for instance) to be collocated in their data centers. This option comes with the benefit of an existing ecosystem, toolset, services and clients, simply extending the cloud to the telco network. The major drawback is that the telco accepts its role as a utility provider of connectivity, with little participation in the service value creation.

Both scenarios are being played out right now, and both paths carry considerable uncertainty and risk for operators that do not want to recognize the strategic implications of their capabilities.