
Wednesday, September 10, 2025

The 6G promise

As I attend TMForum Innovate Americas in Dallas, AI, automation and autonomous networks dominate the debates. I have long held the belief that the promise of 5G to deliver adapted connectivity to different organizations, industries, verticals and market segments was necessary for network operators to create sustainable differentiation. At Telefonica, nearly 10 years ago, I was positing that network slicing would only be useful if we were able to deliver hundreds or thousands of slices.

One of the key insights came from interactions with customers in the automotive, banking and manufacturing industries. The CIOs from these large organizations don’t want to be sold connectivity products. They don’t want the network operator to create and configure the connectivity experience. 

The CIOs from Mercedes, Ford, Magna know better what their connectivity needs are and what kind of slices would be useful than the network operators serving them. They don’t want to have to spend time educating their providers so that they can design a service for them. They don’t want to outsource the optimization of their connectivity to a third party who doesn’t understand their evolving needs. 

The growth in private network implementations in healthcare, energy, mining, transportation and ports, for instance, is a sign that there is demand for dedicated, customized connectivity products. It is also a sign that network operators have so far failed to build the slicing infrastructure and capacity to serve these use cases.

As a result, I proposed that network operators should focus on creating a platform for industries to discover, configure and consume connectivity services. This vision had a lot of prerequisites. Networks need to evolve and adopt network virtualization through the separation of hardware and software, cloud native functions, centralized orchestration, a stand-alone core, network slicing, the building of the platform and API exposure.

A lot of progress has been made in all these categories, to the point that the first dedicated slicing solutions for first responders, defense and industries are emerging. These slices are still mostly statically provisioned and managed by the network operators, but they will gradually grow.

The largest issue in evolving from static to dynamic slicing, and therefore moving from network-operated slices to as-a-service, user-configurable ones, is managing conflicts between the slices. Dedicating static capacity to each slice is inefficient and too cost-prohibitive to implement at scale, except for the largest governmental use cases. Dynamic slice creation and management requires network observability, together with near-real-time capacity prediction, reservation and attribution.

This is where AI can provide the missing step to enable dynamic slicing for network as a service. If data can be extracted from user devices, network telemetry and network functions fast enough to be made available to algorithms for pattern identification in near real time, you can identify the device, user, industry and service, and create the best-fit connectivity, whether for a gaming console connected to a 4K TV over FWA, a business user on a video conference call, industrial collaborative robots assembling a vehicle, or a drone delivering a package.
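To make this concrete, here is a minimal, hypothetical sketch of the pattern-identification step: a few telemetry features are mapped to an assumed set of connectivity profiles. The profile names, thresholds and fields are invented for illustration and do not come from any 3GPP or operator specification.

```python
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    """Hypothetical per-device features extracted in near real time."""
    downlink_mbps: float   # observed downlink demand
    latency_ms: float      # observed round-trip latency
    jitter_ms: float       # observed jitter

def best_fit_profile(sample: TelemetrySample) -> str:
    """Map an observed traffic pattern to the closest connectivity profile."""
    if sample.latency_ms <= 5 and sample.jitter_ms <= 2:
        return "low_latency"      # e.g. collaborative robots on an assembly line
    if sample.downlink_mbps >= 25:
        return "high_throughput"  # e.g. a 4K TV over FWA or a video conference
    return "best_effort"          # default undifferentiated connectivity

if __name__ == "__main__":
    fwa_tv = TelemetrySample(downlink_mbps=40.0, latency_ms=30.0, jitter_ms=8.0)
    print(best_fit_profile(fwa_tv))  # -> "high_throughput"
```

In a real network, the thresholds would be learned from telemetry rather than hard-coded, and the selected profile would be handed to an orchestrator that reserves the corresponding slice capacity.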

All these use cases have different connectivity needs that are today either served by best effort undifferentiated connectivity or rigidly rule-based private networks. 

As 6G is starting to emerge, will it fulfil the 5G promises and deliver curated connectivity experiences?

Monday, March 10, 2025

MWC 25 thoughts

 Back from Mobile World Congress 2025!

I am so thankful I get to meet my friends, clients, ex colleagues year after year and to witness how our industry is moving first hand.

2025 was probably my 23rd congress or so and I always find it invaluable for many reasons. 



Innovation from the East

What stood out for me this year was how much innovation is coming from Asian companies, while most Western companies seem to be focused on cost control.

The feeling was pervasive throughout the show, and the GLOMO award winners showed Huawei, ZTE, China Mobile, SK, Singtel… investing in discovering and solving problems that many in Western markets dismiss as futuristic or outside their comfort zone. In mature markets, where price attrition is the rule, differentiation is key.

On a related topic, being Canadian, I can’t help thinking that many companies and regulators who supported banning some Chinese vendors from their markets over security concerns now find themselves having to evaluate whether American suppliers might not also represent a risk in the future.

Without delving into politics, I saw and heard many initiatives to enhance security, privacy, sovereignty, either in the cloud or the supply chain categories. 

Open telco APIs

Open APIs and the progress of telco network APIs are encouraging, but while it is a good idea, it feels late and lacking in comparison with the webscalers’ tooling and offerings to discover, consume and manage network functions on demand. Much work remains to be done, in my opinion, to enhance the as-a-service portion of the offering, particularly if slicing APIs are to be offered.

Open RAN & RIC

The Open RAN threat has successfully accelerated cloud and virtualized RAN adoption. Samsung started the trend, and Ericsson’s deployment at AT&T has crystallized the model of mMIMO + CU + DU + Non-RT RIC from a main vendor and small cells + rApps from others as a viable option. Vodafone’s RAN refresh may bring more players into the mix, as Mavenir and Nokia are struggling to gain meaningful market share.

The Juniper / HPE acquisition drama, together with the Broadcom / VMware commercial strategy, seems to have killed the idea of an independent Non-RT RIC vendor. The Near-RT RIC remains, in my mind, a flawed proposition as a host of 3rd party xApps, and an expensive gadget for anything other than narrow use cases.

AI

AI, of course, was the belle of the ball at MWC. Everyone had a twist, a demo, a model, an agent, but few were able to demonstrate utility beyond automated time series regression presented as prediction or LLM-based natural language processing ad nauseam…

Some were convincingly starting to show small models tailored to their technology, topology and network, with promising results. It is still early, but it feels that this is where the opportunity lies. The creation and curation of a dataset that can be used to plan, manage, maintain and predict the state of one’s network, with bespoke algorithms, seems more desirable than vague, large, poorly trained wholesale models.

Telco cloud and edge computing are having a bit of a moment, with AI and GPU-as-a-service strategies being enacted.

All in all, many are trying to develop an AI strategy, and while we are still far from the AI-Native Telco Network, there is some progress and some interesting ventures amidst the noise.

Thursday, February 6, 2025

The AI-Native Telco Network VI: Storage


The AI-Native Telco Network I

The AI-Native Telco Network II

The AI-Native Telco Network III

The AI-Native Telco Network IV: Compute

The AI-Native Telco Network V: Network

As it turns out, a network that needs to run AI, either to self-optimize or to offer wholesale AI-related services, needs some adjustments compared to a conventional telecom network. After looking at the compute and network functions, this post looks at storage.

Storage has, for the longest time, been an afterthought in telecom networks. Beyond IT workloads and the management of data centers, storage needs were usually addressed embedded within the compute functions sold by server vendors or, when necessary, as direct-attached storage appliances, usually OEM’d or resold by the same vendors.

Today's networks see each network function, whether physical, virtualized or containerized, coming with its own dedicated storage. The data generated by each function, whether telemetry, alarms, user or control plane data, logs or events, is stored locally first; a portion is then exported to a data lake for cleaning and processing, and eventually to a data warehouse, on a private or public cloud, so that OSS, BSS and analytics functions can provide dashboards on the health, load and usage of the network, as well as recommendations for optimization.

The extraction, cleaning and processing of these disparate datasets takes time, anywhere from 30 minutes to several hours, to accurately represent the network state.

One of the applications of AI/ML in telecom networks is to optimize the network, reactively when there is an event or proactively when a given change can be planned for. This supposes that a feedback loop is built between the analytics layer and the operational layer, whereby a recommendation to change network parameters can be executed programmatically and automatically.
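As a rough illustration of such a feedback loop, the sketch below polls a KPI feed, flags a congested cell and pushes a configuration change. The KPI names, thresholds and the apply step are assumptions standing in for whatever telemetry and configuration APIs a given network exposes.

```python
import time
from typing import Optional

def read_kpis() -> dict:
    """Stand-in for a near-real-time telemetry feed."""
    return {"cell_id": "gNB-0042", "prb_utilization": 0.93, "drop_rate": 0.02}

def recommend(kpis: dict) -> Optional[dict]:
    """Toy analytics step: flag congested cells and suggest a parameter change."""
    if kpis["prb_utilization"] > 0.85:
        return {"cell_id": kpis["cell_id"], "action": "enable_load_balancing"}
    return None

def apply_change(change: dict) -> None:
    """Placeholder for the operational layer; a real loop would call the
    vendor or SMO configuration API here."""
    print(f"Applying {change['action']} on {change['cell_id']}")

if __name__ == "__main__":
    for _ in range(3):  # in production this loop runs continuously
        change = recommend(read_kpis())
        if change:
            apply_change(change)
        time.sleep(1)
```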

Speed becomes necessary, particularly to react to unpredicted events. Reducing reaction time when there is an element outage is crucial. This supposes that the state of the network must be observable in near real time, so that the AI/ML engines can detect patterns and anomalies and provide root cause analysis and remediation as fast as possible. The compute applied to these calculations, together with the speed of transmission, has a direct effect on reaction time, but it is not the only factor.

Storage, as it turns out, is also a crucial element of creating an AI-Native network. The large majority of AI/ML relies on storing data as objects, whereby each data element is stored independently, in an unstructured manner, irrespective of size, but with associated metadata that describes the data element in detail, allowing easy association and manipulation for AI/ML.
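As an illustration, object stores keep the payload and its descriptive metadata together, which is what makes later association and filtering cheap. The snippet below is a hedged sketch against an S3-compatible store; the endpoint, bucket, key layout and metadata fields are all invented for the example.

```python
import json
import boto3  # any S3-compatible object store client could be used

# Assumption: an S3-compatible endpoint, bucket and credentials already exist.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.internal")

event_record = {"cell_id": "gNB-0042", "event": "handover_failure",
                "ts": "2025-02-06T10:15:00Z"}

s3.put_object(
    Bucket="ran-telemetry",                        # hypothetical bucket
    Key="events/2025/02/06/gNB-0042/000001.json",  # hypothetical key layout
    Body=json.dumps(event_record).encode(),
    Metadata={                                     # descriptive metadata stored with the object
        "domain": "ran",
        "vendor": "vendor-a",
        "schema-version": "1.2",
    },
)
```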

Why are traditional storage architectures not suitable for AI-Native Networks?

To facilitate the AI-Native network, data elements must be extracted from their network functions quickly and transferred to a data repository that allows their manipulation at scale. It is easier said than done. Legacy systems were originally built for block storage (databases and virtual machines, great for low latency, bad for high throughput). Objects are usually not natively supported and live in separate storage. Each vendor supports different protocols and interfaces, and each store is single-tenant to its application.

Data needs to be shared and read by many network functions simultaneously, while it is being processed. Traditional architectures see data stored individually by network functions, then exported to larger databases, then amalgamated in data lakes for processing. The process is lengthy, error-prone and negates the capacity to act and react in real time.

The datasets are increasingly varied: large and small objects, data streams and files, random and sequential read and write requirements. Legacy storage solutions require different systems for different use cases and datasets. This further lengthens the data amalgamation necessary for automation at scale.

Data needs to be properly labeled, without limitations on metadata, annotations and tags, equally for billions of small objects (event records) or very large ones (video files). Traditional storage solutions are designed either for small or for large objects and struggle to accommodate both in the same architecture. They also limit the amount of metadata per object. This increases cost and time to insight while reducing the capacity to evolve.

Datasets are live structures. They often exist in different formats and versions for different users. Traditional architectures are not able to handle multiple formats simultaneously, and versions of the same dataset require separate storage elements. This leads to data inconsistencies, corruption and divergence of insight.

Performance is key in AI systems, and it is multidimensional. Storage solutions need to accommodate high throughput, scale-out capacity and low latency simultaneously. Traditional storage systems are built for capacity but not designed for high throughput and low latency, which dramatically reduces the performance of data pipelines.

Hybrid and multi-cloud support becomes a key requirement for AI, as data needs to be exposed to access, transport, core and OSS/BSS domains in the edge, the private cloud and the public cloud simultaneously. Traditional storage solutions require adaptation, translation, duplication and migration to function across cloud boundaries, which significantly increases their cost while reducing their performance and capabilities.

As we have seen, the data storage architecture for a telecom network becomes a strategic infrastructure decision and the traditional storage solutions cannot accommodate AI and network automation at scale.

Storage Requirements for AI-Native Networks

Perhaps the most important attribute for AI project storage is agility—the ability to grow from a few hundred gigabytes to petabytes, to perform well with rapidly changing mixed workloads, to serve data to training and production clients simultaneously throughout a project’s life, and to support the data models used by project tools.

The attributes of an ideal AI storage solution are: 

Performance Agility

          I/O performance that scales with capacity.

          Rapid manipulation of billions of items, e.g., for randomization during training.

Capacity Flexibility

          Wide range (100s of gigabytes to petabytes).

          High performance with billions of data items.

          Range of cost points optimized for both active and seldom accessed data.

Availability & Data Durability

          Continuous operation over decade-long project lifetimes.

          Protection of data against loss due to hardware, software, and operational faults.

          Non-disruptive hardware and software upgrade and replacement.

          Seamless data sharing by development, training, and production.

Space and Power Efficiency

          Low space and power requirements that free data center resources for power-hungry computation.

Security

          Strong administrative authentication.

          “Data at rest” encryption.

          Protection against malware (especially ransomware) attacks.

Operational Simplicity

          Non-disruptive modernization for continuous long-term productivity.

          Support for AI projects’ most-used interconnects and protocols.

          Autonomous configuration (e.g. device groups, data placement, protection, etc.).

          Self-tuning to adjust to rapidly changing mixed random/ sequential I/O loads.

Hybrid and Multi Cloud Natively

          Data agility to cross cloud boundaries

          Centralized data lifecycle management

          Decide which data set is stored and processed where

          From edge for inference to private cloud for optimization and automation to public cloud for model training and replication.

Traditional "spinning disk" based storage has not been designed for AI/ML workloads. It lacks the performance, agility, cost effectiveness, latency and power consumption attributes necessary to enable AI networks at scale. Modern storage infrastructure designed for high performance computing relies on flash storage, an efficient, cost-effective, low-power, high-performance technology that enables compute and network elements to perform at line rate for AI workloads.

Tuesday, January 28, 2025

The AI-Native Telco Network V: Network


The AI-Native Telco Network I

The AI-Native Telco Network II

The AI-Native Telco Network III

The AI-Native Telco Network IV: Compute

As we have seen in previous posts, AI and the journey to autonomous networks force telco operators to look at their network architecture and reevaluate whether their infrastructure is fit for this purpose. In many cases, their first reflex is to deploy new servers and GPUs in dedicated AI pods, only to find that processing power alone is not enough for a high-performance AI system. The network connectivity needs to be accelerated as well.

SmartNICs

While dedicated routing and packet processing are necessary, one way to increase performance of an AI pod is to deploy accelerators in the shape of Smart Network Interface Cards (SmartNICs).

SmartNICs are specialized network cards designed to offload certain networking tasks from the CPU and provide additional processing power at the network edge. Unlike traditional NICs, which merely serve as communication devices, SmartNICs come equipped with onboard processing capabilities such as CPUs, ASICs, FPGAs or programmable processors. These capabilities allow SmartNICs to handle packet processing, traffic management, and other networking tasks, without burdening the CPU.

While they are certainly hybrid compute / network dedicated silicon, they accelerate overall performance by offloading packet processing, user plane functions, load balancing, etc. from the CPUs and GPUs, which can then be freed up for pure AI workload processing.

For telecom providers, SmartNICs offer a way to improve network efficiency while simultaneously boosting the ability to handle AI workloads in real-time.

High-Speed Ethernet

One of the most straightforward ways to increase network speed is by adopting higher bandwidth Ethernet standards. Traditional networks may rely on 10GbE or 25GbE, but AI workloads benefit from faster connections, such as 100GbE or even 400GbE, which provide higher throughput and lower latency.

AI models, especially large deep learning models, require massive data transfer between nodes. Upgrading to 100GbE or 400GbE can drastically improve the speed at which data is exchanged between GPUs, CPUs, and storage systems in an AI pod, reducing the time required to train models and increasing throughput.

AI models often need to pull vast amounts of training data from storage. Higher-speed Ethernet allows AI pods to access data more quickly, decreasing bottlenecks in I/O.
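A quick back-of-the-envelope calculation shows why the link speed matters for data-hungry training jobs; the 10 TB dataset size is arbitrary and protocol overhead is ignored.

```python
# Time to move a 10 TB training dataset at different line rates (illustrative only).
dataset_bits = 10e12 * 8  # 10 TB expressed in bits

for gbps in (25, 100, 400):
    seconds = dataset_bits / (gbps * 1e9)
    print(f"{gbps:>3} GbE: {seconds / 60:.1f} minutes")

# 25 GbE ~ 53 minutes, 100 GbE ~ 13 minutes, 400 GbE ~ 3 minutes
```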

Use Low-Latency Networking Protocols

Adopting advanced networking protocols such as InfiniBand or RoCE (RDMA over Converged Ethernet) is essential to reduce latency in AI pods. These protocols are designed to enable faster communication between nodes by bypassing traditional network stacks and reducing the overhead that can slow down AI workloads.

InfiniBand and RoCE provide extremely low-latency communication between AI pods, which is crucial for high-performance AI training and inference.
These protocols support higher bandwidths (up to 200Gbps or more) and provide more efficient communication channels, ideal for high-throughput AI workloads like distributed deep learning.

To increase AI performance, telecom operators need to focus on upgrading their network infrastructure to support the growing demands of AI workloads. By implementing strategies such as high-speed Ethernet, SmartNICs, and specialized AI interconnects, operators can enhance the speed, scalability, and efficiency of their AI pods. This enables faster processing of large datasets, reduced latency, and improved overall performance for AI training and inference, allowing telecom operators to stay ahead in the competitive AI-driven landscape.
Storage, as we will see in the next post, also plays an integral part in AI performance on a telecom network.

Thursday, January 23, 2025

The AI-Native Telco Network IV: Compute

The AI-Native Telco Network I

 The AI-Native Telco Network II

 The AI-Native Telco Network III

As we have seen in previous posts, to accommodate and make use of AI at scale, a network must be tuned and architected for this purpose. While any telco network can deploy AI in discrete environments or throughout its fabric, the difference between a data strategy and an AI strategy is speed plus a feedback loop.

Most data collected in a telco network has been used for very limited purposes: mainly archiving for forensics to determine the root cause of an anomaly or outage, charging and customer management functions, or lawful interception and regulatory requirements. For these use cases, data needs to be properly formatted and laid to rest until analytics engines can provide a representation of the state of the network or an account. Speed is not an issue here; the system can tolerate minutes or hours of delay before a coherent picture is formed and represented.

AI, by contrast, can provide better insight through larger datasets than classical analytics. It provides a better capacity to correlate events and to predict the evolution of the network state. It can also propose optimizations, enhancements and mitigation recommendations, but to be truly effective, it needs a feedback loop to the network functions, so that these recommendations can be turned into actions and automated.


Herein lies the trick. If you want to run AI in your network so that you can automate it, allowing it to reactively or proactively auto-scale, heal, and optimize its performance, power consumption, cost, etc. at scale, it cannot be done manually. Automation is necessary throughout. Speed from event, anomaly, pattern or insight detection to action becomes key.

As we have seen, speed is the product of high performance, low latency in the production, extraction, storage, and processing of data to create actionable insights that can be automated. At the fabric layer, compute, connectivity and storage are the elements that need to be properly designed to enable the speed to run AI.

In this post, we will look at the compute function. Processing, analyzing, manipulating Data requires computing capabilities. There are different architectures of computing units for different purposes.

  • CPUs (Central Processing Units) are general purpose processors, suitable for serial tasks. Multiple CPU cores can work in parallel to enhance performance. They are suitable for most telecom functions, except real-time processing. Generic CPUs are used in most telco data centers and clouds for most telco functions, from OSS and BSS to core and transport. At the edge and in the RAN, CPUs are used for Centralized Unit functions.
  • ASICs (Application Specific Integrated Circuits) are chips designed for specific tasks or applications. They are not as versatile as other processing units but deliver the absolute highest performance in the smallest footprint for specific applications. They can be found in first-generation Open RAN servers to run Distributed Unit functions, as well as in specialized packet routing and switching (more on that in the connectivity post).
  • FPGAs (Field Programmable Gate Arrays) are circuits that can be reprogrammed to adapt to specific workloads without necessitating a complete redesign. They provide a good balance between adaptability and performance and are suitable for cryptographic and rapid data processing. They are used in telco networks in security gateways, as well as in advanced routing and packet processing functions.
  • GPUs (Graphics Processing Units) feature large numbers of smaller cores, coupled with high memory bandwidth, making them suitable for graphics processing and large numbers of parallel matrix calculations. In telco networks, GPUs are starting to be introduced for AI/ML workloads in data centers and clouds (neural networks and model training), as well as in the RAN for the Distributed Unit and RAN Intelligent Controller.
  • TPUs (Tensor Processing Units) are Google's specialized processing units optimized for tensor processing for ML and deep learning model training and inference. They are not yet used in telco environments but can be used on Google Cloud in a hybrid scenario.
  • NPUs (Neural Processing Units) are designed for neural network and deep learning processing. They are very suitable for inference tasks, as their power consumption and footprint are very small. They are starting to appear in telco networks at the edge, and in devices.

Artificial intelligence and machine learning can run on any of the above computing platforms. The difference is the performance, footprint, cost and power consumption profile. We have lately seen the emergence of GPUs as the new processing unit poised to replace CPUs, ASICs and FPGAs in specialized traffic functions, using the RAN and AI as its beachhead. GPUs are key in running AI workloads at scale, delivering the performance in terms of low latency and high throughput necessary for rapid time to insight.
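The placement question often reduces to deciding, per workload, whether a GPU is available and worth using. Below is a minimal sketch, assuming PyTorch as the framework; the model is a stand-in for whatever inference or training job is being scheduled.

```python
import torch  # assumption: PyTorch is the ML framework in use

def pick_device() -> torch.device:
    """Prefer a GPU when present; fall back to CPU for lighter inference tasks."""
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

device = pick_device()
# A small anomaly-scoring model might run on CPU at the edge,
# while training would be pinned to a GPU pool in the private cloud.
model = torch.nn.Linear(16, 1).to(device)
scores = model(torch.randn(8, 16, device=device))
print(device, scores.shape)
```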

Their cost and power consumption force network operators to find the right balance between the number of GPUs and their placement throughout the network, to enable both the high processing power necessary for model training in the private cloud and the low latency needed for rapid inferencing and automation at the edge. While this architecture might provide the best basis for an automated or autonomous network, its cost and the rapid rate of change in GPU generations might give most operators pause.

The main challenge becomes selecting a compute architecture that can provide the most capacity and speed while remaining cost effective to procure and run. For this reason, many telco operators have decided to centralize their GPU farms in a first step, to fine-tune their use cases, with limited decentralized deployments. Another avenue for exploration is wholesaling the compute capacity to reduce internal costs. We have seen a few GPUaaS and AIaaS initiatives announced recently.

In any case, most operators who have deployed high-capacity AI pods with GPUs find that the performance of the overall system requires further refinement and look at connectivity as the next step in their AI-Native network journey. That will be the theme of our next post.

Monday, December 16, 2024

The AI-Native Telco Network II

I have been working on telco network big data, machine learning, deep learning and AI for the last 8 years or so. Between interpretative AI, predictive AI and generative AI, we have seen much progress lately, but I think a lot of the discussion about using general Large Language Models for telco networks is not applicable.

Much of the data in telcos, like in government and defense, is proprietary. It is not shared outside the organization and wouldn't suffer "contamination" from external sources except under very specific conditions, for very limited subsets.


As a result, a large part of cloud-based, public LLMs is just noise as far as telcos are concerned. The largest opportunity is in proprietary, smaller models, where the algorithmics can be somewhat outsourced but the storage, processing and training of the model are in house. This type of sovereign or proprietary AI can better account for the specificity of a network and its users than larger models trained on generic data.


The problem many encounter is that operators don't necessarily have the data literacy or resources necessary to develop the algorithms, or even to format the dataset properly, while specialized vendors might have the AI/ML domain expertise but cannot train the models on real data, since the data is proprietary and stays on-network.


The result is telcos first focusing on the architecture and infrastructure of the data network and pipeline, the formatting and scrubbing of the dataset, and the storage, processing and transmission of the data between on-premise and private clouds and the interaction with hybrid / public cloud instances.

Vendors are proposing a variety of solutions with promises of savings, new revenues and new services, but in many cases they are based on models running on synthetic data, and no one knows what the result will be until they are tested with the real dataset, tuned and remodeled.

Training models on synthetic data might be necessary for vendors, but it's a bit like training for football in the hope of playing rugby. Sure, some skills are transferable, but even a world-class football player won't make it in professional rugby.

This is where the opportunity lies for operators: recruit and train telco professionals to be data literate, so that they can understand how vendors should produce datasets and how to exploit them. This is not a spectator sport where you can just buy solutions off the shelf and let your vendors manage them for you.



Thursday, August 8, 2024

The journey to automated and autonomous networks

 

The TM Forum has been instrumental in defining the journey towards automation and autonomous telco networks. 

As telco revenues from consumers continue to decline and the 5G promise to create connectivity products that enterprises, governments and large organizations will be able to discover, program and consume remains elusive, telecom operators are under tremendous pressure to maintain profitability.

The network evolution that started with Software Defined Networking and Network Functions Virtualization, and more recently the cloud native transition, aims to deliver network programmability for the creation of innovative on-demand connectivity services. Many of these services require deterministic connectivity parameters in terms of availability, bandwidth and latency, which necessitate an end-to-end cloud native fabric and the separation of control and data planes. Centralized control of the cloud native functions allows resources to be abstracted and allocated on demand as topology and demand evolve.

A benefit of a cloud native network is that, as software becomes more open and standardized in a multi vendor environment, many tasks that were either manual or relied on proprietary interfaces can now be automated at scale. As layers of software expose interfaces and APIs that can be discovered and managed by sophisticated orchestration systems, the network can evolve from manual, to assisted, to automated, to autonomous functions.


The TM Forum defines six levels of evolution, from fully manual operation to fully autonomous networks.

  • Level 0 - Manual operation and maintenance: The system delivers assisted monitoring capabilities, but all dynamic tasks must be executed manually.
  • Level 1 - Assisted operations and maintenance: The system executes a specific, repetitive subtask based on pre-configuration, which can be recorded online and traced, in order to increase execution efficiency.
  • Level 2 - Partial autonomous network: The system enables closed-loop operations and maintenance for specific units under certain external environments via statically configured rules.
  • Level 3 - Conditional autonomous network: The system senses real-time environmental changes and, in certain network domains, will optimize and adjust itself to the external environment to enable closed-loop management via dynamically programmable policies.
  • Level 4 - Highly autonomous network: In a more complicated cross-domain environment, the system enables decision-making based on predictive analysis or active closed-loop management of service-driven and customer experience-driven networks via AI modeling and continuous learning.
  • Level 5 - Fully autonomous network: The system has closed-loop automation capabilities across multiple services, multiple domains (including partners’ domains) and the entire lifecycle via cognitive self-adaptation.

After describing the framework and conditions for the first three levels, the TM Forum has recently published a white paper describing the Level 4 industry blueprints.

The stated goals of Level 4 are to enable the creation and roll out of new services within one week with deterministic SLAs, and the delivery of Network as a Service. Furthermore, this level should allow fewer personnel to manage the network (savings of thousands of person-years) while reducing energy consumption and improving service availability.

These are certainly very ambitious objectives. The paper goes on to describe "high value scenarios" to guide Level 4 development. This is where we start to see cognitive dissonance creeping in between the stated objectives and the methodology. After all, much of what is described here exists today in cloud and enterprise environments, and I wonder whether telco is once again reinventing the wheel in trying to adapt / modify existing concepts and technologies that are already successful in other environments.

First, the creation of deterministic connectivity is not (only) the product of automation. Telco networks, in particular mobile networks, are composed of a daisy chain of network elements that see customer traffic, signaling, data repository, lookup, authentication, authorization, accounting and policy management functions being coordinated. On the mobile front, the signal effectiveness varies over time, as weather, power, demand, interference, devices... impact the effective transmission. Furthermore, the load on the base station, the backhaul, the core network and the internet peering point also varies over time and has an impact on the overall capacity. As you understand, creating a connectivity product with deterministic speed, latency and capacity to enact Network as a Service requires a systemic approach. In a multi-vendor environment, the RAN, the transport and the core must be virtualized, relying on solid fiber connectivity as much as possible to enable the capacity and speed. Low latency requires multiple computing points, all the way to the edge or on premise. Deterministic performance requires not only virtualization and orchestration of the RAN, but also of the PON fiber, together with end-to-end slicing support and orchestration. This is something that I led at Telefonica with an open compute edge computing platform, a virtualized (XGS) PON on an ONF ONOS VOLTHA architecture with an open virtualized RAN. This was not automated yet, as most of these elements were advanced prototypes at that stage, but the automation is the "easy" part once you have assembled the elements and operated them manually for enough time. The point here is that deterministic network performance is attainable but still a far objective for most operators, and it is a necessary condition to enact NaaS, before even automation and autonomous networks.

Second, the high value scenarios described in the paper are all network-related. Ranging from network troubleshooting to optimization and service assurance, these are all worthy objectives, but they still do not feel "high value" in terms of the creation of new services. While it is natural that automation first focuses on cost reduction for roll out, operation, maintenance and healing of the network, one would have expected a more ambitious description of "new services".

All in all, the vision is ambitious, but there is still much work to do in fleshing out the details and linking the promised benefits to concrete services beyond network optimization.

Wednesday, July 3, 2024

June 2024 Open RAN requirements from Vodafone, Telefonica, Deutsche Telekom, TIM and Orange


As is now customary, the "big 5" European operators behind Open RAN have released their updated requirements to the market, indicating to vendors where they should direct their roadmaps to have the best chance of being selected in these networks.

As per previous iterations, I find it useful to compare and contrast the unanimous and highest priority requirements as indications of market maturity and directions. Here is my read on this year's release:

Scenarios:

As per last year, the big 5 unanimously require support for O-RU and vDU/CU with the open fronthaul interface on site for macro deployments. This indicates that although the desire is to move to a disaggregated implementation, with the vDU / CU potentially moving to the edge or the cloud, the operators are not fully ready for these scenarios and first prioritize a like-for-like replacement of a traditional gNodeB with a disaggregated, virtualized version, but all at the cell site.

Moving to the high priority scenarios requested by a majority of operators, vDU/vCU in a remote site with O-RU on site makes its appearance, together with RAN sharing. Both MORAN and MOCN scenarios are desirable, the former with shared O-RU and dedicated vDU/vCU and the latter with shared O-RU, vDU and optionally vCU. In all cases, a RAN sharing management interface is to be implemented to allow host and guest operators to manage their RAN resources independently.

Additional high priority requirements are the support for indoor and outdoor small cells: indoors, sharing O-RU and vDU/vCU in multi-operator environments; outdoors, single operator with O-RU and vDU either co-located on site or fully integrated with a Higher Layer Split. The last high priority requirement is for 2G/3G support, without indication of architecture.

Security:

The security requirements are essentially the same as last year, freely adopting 3GPP requirements for Open RAN. The polemic around Open RAN's level of security compared to other cloud virtualized applications or traditional RAN architectures has been put to bed. Most realize that open interfaces inherently open more attack surfaces, but this is not specific to Open RAN; every cloud based architecture has the same drawback. Security by design goes a long way towards alleviating these concerns, and a proper zero-trust architecture can in many cases provide a higher security posture than legacy implementations. In this case, extensive use of IPsec, TLS 1.3 and certificates at the interface and port levels for the open fronthaul and management planes provides the necessary level of security, together with the mTLS interface between the RICs. The O-Cloud layer must support Linux security features, secure storage, and encrypted secrets with an external storage and management system.
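To give a flavor of what such requirements translate to in practice, here is a minimal sketch of a client-side TLS 1.3 context with mutual authentication, using Python's standard ssl module; the certificate file names are placeholders, not part of any O-RAN specification.

```python
import ssl

# Enforce TLS 1.3 and mutual authentication (mTLS); paths are illustrative only.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.load_verify_locations("operator-ca.pem")     # trust anchor for the peer certificate
ctx.load_cert_chain("client.pem", "client.key")  # our own certificate for mTLS
ctx.verify_mode = ssl.CERT_REQUIRED
```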

CaaS:

As per last year, the cloud native infrastructure requirements have been refined, including hardware accelerator (GPU, eASIC) Kubernetes support, block and object storage for dedicated and hyper-converged deployments, etc. Kubernetes infrastructure discovery, deployment, lifecycle management and cluster configuration have been further detailed. Power saving specific requirements have been added at the fan and CPU level, with SMO-driven policy and configuration and idle-mode power-down capabilities.

CU / DU:

The CU / DU interface requirements remain the same, basically the support for all Open RAN interfaces (F1, HLS, X2, Xn, E1, E2, O1...). The support for both look-aside and inline accelerator architectures is also the highest priority, indicating that operators haven't really reached a conclusion on a preferred architecture and are mandating both for flexibility's sake (in other words, inline acceleration hasn't convinced them that it can efficiently, in cost and power, replace look-aside). Fronthaul ports must support up to 200Gb through 12 x 10/25Gb combinations and midhaul up to 2 x 100Gb. Energy efficiency and consumption are to be reported for all hardware (servers, CPUs, fans, NIC cards...). Power consumption targets for D-RAN of 400 watts at 100% load for 4T4R and 500 watts for 64T64R are indicated. These targets seem optimistic and poorly indicative of current vendors' capabilities in that space.

O-RU:

The radio situation is still messy and my statements from last year still mostly stand: "While all operators claim highest urgent priority for a variety of Radio Units with different form factors (2T2R, 2T4R, 4T4R, 8T8R, 32T32R, 64T64R) in a variety of bands (B1, B3, B7, B8, B20, B28B, B32B/B75B, B40, B78...) and with multi band requirements (B28B+B20+B8, B3+B1, B3+B1+B7), there is no unanimity on ANY of these. This leaves vendors trying to find which configurations could satisfy enough volume to make the investments profitable in a quandary. There are hidden dependencies that are not spelled out in the requirements and this is where we see the limits of the TIP exercise. Operators cannot really at this stage select 2 or 3 new RU vendors for an open RAN deployment, which means that, in principle, they need vendors to support most, if not all, of the bands and configurations they need to deploy in their respective networks. Since each network is different, it is extremely difficult for a vendor to define the minimum product line-up necessary to satisfy most of the demand. As a result, the projections for volume are low, which makes vendors focus only on the most popular configurations. While everyone needs 4T4R or 32T32R in the n78 band, having 5 vendors providing options for these configurations, with none delivering B40 or B32/B75, makes it impossible for operators to select a single vendor and for vendors to aggregate sufficient volume to create a profitable business case for open RAN." This year, there is one high priority configuration with unanimous support: 4T4R B3+B1. The other highest priority configurations requested by a majority of operators are 2T4R B28B+B20+B8, 4T4R B7, B3+B1, B32B+B75B, and 32T32R B78, with various power targets from 200 to 240W.

Open Front Haul:

The fronthaul interface requirements only acknowledge the introduction of uplink enhancements for massive MIMO scenarios as they will be introduced in the 7.2.x specification, with a lower priority. This indicates that while Ericsson's proposed interface and its architectural impact are being vetted, it is likely to become an optional implementation, left to the vendor's choice until / unless credible cost / performance gains can be demonstrated.

Transport:

Optical budgets and scenarios are now introduced.

RAN features:

Final MoU positions are now proposed. Unanimous items introduced in this version revolve mostly around power consumption and efficiency counters, KPIs and mechanisms. Other new requirements introduced follow 3GPP Releases 16 and 17 on carrier aggregation, slicing and MIMO enhancements.

Hardware acceleration:

A new section has been introduced to clarify the requirements associated with L1 and L2 use of look-aside and inline acceleration. The most salient requirement is for simultaneous multi-RAT 4G/5G support.

Near RT RIC:

The Near Real Time RIC requirements continue to evolve and be refined. My perspective hasn't changed on the topic, and a detailed analysis can be found here. In short, letting third parties prescribe policies that will manipulate the DU's scheduler is anathema for most vendors in the space and, beyond the technical difficulties, it would go against their commercial interests. Operators will have to push very hard, with much commercial incentive, to see xApps from 3rd party vendors commercially deployed.

E2E use cases:

End-to-end use cases are being introduced to clarify the operators' priorities for deployments. There are many, but they offer a good understanding of those priorities: traffic steering for dynamic traffic load balancing, QoE and QoS based optimization to allocate resources based on a desired quality outcome, RAN sharing, slice assurance, V2X, UAV, energy efficiency... This section is a laundry list of desiderata, all mostly high priority, showing here maybe that operators are getting a little unfocused on which real use cases they should pursue as an industry. As a result, it is likely that too many priorities result in no priority at all.

SMO

With over 260 requirements, SMO and Non-RT RIC is probably the most mature section and shows a true commercial priority for the big 5 operators.

All in all, the document provides a good idea of the level of maturity of Open RAN for the operators that have been supporting it the longest. The type of requirements and their prioritization provide a useful framework for vendors who know how to read them.

More in depth analysis of Open RAN and the main vendors in this space is available here.


Thursday, June 20, 2024

Telco grade or cloud grade? II

I have oftentimes criticized network operators’ naivety when it comes to their capacity to convince members of the ecosystem to adopt their telco idiosyncrasies.

Wednesday, March 27, 2024

State of Open RAN 2024: Executive Summary

 

The 2023 Open RAN market ended with a bang, with AT&T awarding Ericsson and Fujitsu a $14 billion deal to convert 70% of its traffic to run on Open RAN by the end of 2026. 2024 started equally loud with the $13 billion acquisition of Juniper Networks by HPE on the thesis of the former's progress in telecoms AI, and specifically in RAN intelligence with the launch of its RIC program.

2023 also saw the long-awaited launch of 1&1 Drillisch in Germany, the first Open RAN greenfield in Europe, as well as the announcement from Vodafone that it will release a RAN RFQ that will see 30% of its 125,000 global sites dedicated to Open RAN.

Commercial deployments are now under way in western Europe, spurred by Huawei replacement mandates.

On the vendors’ front, Rakuten Symphony seems to have markedly failed to capitalize on the Altiostar acquisition and convince brownfield network operators to purchase telecom gear from a fellow network operator. While Ericsson has announced its support for Open RAN with conditions, Samsung has been the vendor making the most progress, with convincing market share growth across the geographies it covers. Mavenir has been growing steadily. A new generation of vendors has taken advantage of the Non-Real-Time RIC / SMO opportunity to enter the space. Non-traditional RAN vendors such as VMware and Juniper Networks, or SON vendors like AirHop, have grown the most in that space, together with pure new entrants and app players such as Rimedo Labs. With the acquisitions of VMware and Juniper Networks, both leaders in the RIC segment, 2024 could be make or break for this category, as these companies reevaluate their priorities and align commercial interests with their acquirers.

On the technology side, the O-RAN Alliance has continued its progress, publishing new releases while establishing bridgeheads with 3GPP and ETSI to facilitate the inclusion of Open RAN in the mainstream 5G-Advanced and 6G standards. The accelerator debate between inline and look-aside architectures has died down, with the first layer 1 abstraction layers allowing vendors to effectively deploy on different silicon with minimal adjustment. Generative AI and large language models have captured the industry’s imagination, and Nvidia has been capitalizing on the seemingly infinite appetite for specialized computing in cloud and telecom networks.

This report provides an exhaustive review of the key technology trends, vendors’ product offerings and strategies, ranging from silicon, servers, cloud CaaS, Open RUs, DUs, CUs, RICs, apps and SMOs in the Open RAN space in 2024.

Wednesday, January 31, 2024

The AI-Native Telco Network

AI, and more particularly generative AI, has been a big buzzword since the public launch of GPT. The promise of AI to automate and operate complex tasks and systems is pervading every industry, and telecom is not impervious to it.

Most telecom equipment vendors have started incorporating AI or brushed up their big data / analytics skills at least in their marketing positioning. 
We have even seen a few market acquisitions where AI / automation has been an important part of the investment narrative / thesis (HPE / Juniper Networks).
Concurrently, many startups are being founded or are pivoting towards AI /ML to take advantage of this investment cycle. 

In telecoms, there has been use for big data, machine learning, deep learning and other similar methods for a long time. I was leading such a project at Telefonica in 2016, using advanced prediction algorithms to detect alarming patterns, infer root cause analysis and suggest automated resolutions.

While generative AI is somewhat new, the use of data to analyze, represent, predict network conditions is well known. 

AI in telecoms is starting to show some promises, particularly when it comes to network planning, operation, spectrum optimization, traffic prediction, and power efficiency. It comes with a lot of preconditions that are often glossed over by vendors and operators alike. 

Like all data-dependent technologies, one first has to have the ability to collect, normalize, sanitize and clean data before storing it for useful analysis. In an environment as idiosyncratic as a telecoms network, this is not an easy task. Not only are networks composed of a mix of appliances, virtual machines and cloud native functions, they have had successive technological generations deployed alongside each other, with different data schemas, protocols, interfaces and repositories, which makes extraction arduous. After that step, normalization is necessary to ensure that the data is represented the same way, with the same attributes, headers, … so that it can be exploited. Most vendors have their own proprietary data schemas or “augment” standards with “enhanced” headers and metadata. In many cases the data needs to be translated into a format that can be normalized for ingestion. Cleaning and sanitizing are necessary to ensure that redundant or outlying data points do not skew the dataset. As always, “garbage in / garbage out” is an important concept to keep in mind.
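As a small illustration of the normalization step, the sketch below maps two invented vendor export formats onto one common schema; the field names are hypothetical, since real vendor exports differ widely.

```python
from datetime import datetime, timezone

def normalize(record: dict, vendor: str) -> dict:
    """Map vendor-specific telemetry fields onto a common schema (illustrative only)."""
    if vendor == "vendor_a":
        return {
            "timestamp": datetime.fromtimestamp(record["ts_epoch"], tz=timezone.utc).isoformat(),
            "element_id": record["ne_id"],
            "kpi": "prb_utilization",
            "value": record["prbUtil"] / 100.0,      # percent -> ratio
        }
    if vendor == "vendor_b":
        return {
            "timestamp": record["eventTime"],         # already ISO 8601
            "element_id": record["managedElement"],
            "kpi": "prb_utilization",
            "value": float(record["PRB_USED_RATIO"]),
        }
    raise ValueError(f"unknown vendor: {vendor}")

print(normalize({"ts_epoch": 1706700000, "ne_id": "gNB-17", "prbUtil": 76}, "vendor_a"))
```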

These difficult steps are unfortunately not the only prerequisites for an AI native network. The part that is often overlooked is that the network has to be somewhat cloud native to take full advantage of AI. Automation in telecom networks requires interfaces and APIs to be defined, open and available at every layer, from access to transport to the core, from the physical to the virtual and cloud native infrastructure. NFV, SDN, network disaggregation, open optical, Open RAN, service based architecture, … are some of the components that can enable a network to take full advantage of AI.
Cloud networks and data centers seem to be the first to adopt AI, both for the hosting of the voracious GPUs necessary to train the Large Language Models and for the resale / enablement of AI oriented companies. 

For that reason, the greenfield networks that have recently been deployed with state-of-the-art cloud native technologies should be the prime candidates for AI / ML based network planning, deployment and optimization. The amount of work necessary for the integration and deployment of AI native functions is objectively much lower than for their incumbent competitors.
We haven’t really seen sufficient evidence, however, that this level of cloud "nativeness" enables mass optimization and automation with AI/ML that would result in massive cost savings, at least in OPEX, creating an unfair competitive advantage against incumbents.

As the industry approaches Mobile World Congress 2024, with companies poised to showcase their AI capabilities, it is crucial to remain cognizant of the necessary prerequisites for these technologies to deliver tangible benefits. Understanding the time and effort required for networks to truly benefit from AI is essential in assessing the realistic impact of these advancements in the telecom sector.