Friday, October 20, 2023

FYUZ 2023 review and opinions on latest Open RAN announcements

 

Last week marked the second edition of FYUZ, the Telecom Infra Project's annual celebration of open and disaggregated networks. Throughout the year, TIP provides a space for innovation and collaboration across the main telecom network domains: access, transport and core. Its working groups create deployment blueprints as well as implementation guidelines and documentation. The organization also federates a number of open labs, facilitating interoperability, conformance and performance testing.

I was not there for the show's first edition last year, but found a lot of valuable insight in this year's. I understand from casual discussions with participants that this year was a little smaller than last, probably because the previous edition saw Meta presenting its metaverse-ready networks strategy, which attracted a lot of people from outside the traditional telco realm. At about 1,200 attendees, the show felt busy without being overwhelming, and the mix of main stage conference content in the morning and breakout presentations in the afternoon left ample time for sampling the top-notch food and browsing the booths. What also felt very different about this show was how approachable and relaxed attendees were, which allowed for productive yet casual discussions.

Even before FYUZ, the show's previous incarnation, the TIP forum, was a landmark event for vendors and operators announcing their progress on open and disaggregated networks, particularly around Open RAN.

The news that came out of the show this year marked interesting progress in the technology's implementation, and a possible transition from the trough of disillusionment to pragmatic implementation.

The first day saw big announcements from Santiago Tenorio, TIP's chairman and head of Open RAN at Vodafone. The operator announced that Open RAN's evaluations and pilots were progressing well and that its next global RFQ for RAN refresh, covering over 125,000 cell sites, would see Open RAN win at least 30% of the planned deployments. The RFQ is due to be released this year, with vendor selection in early 2024, as Vodafone's contracts with its existing vendors are due to expire in April 2025.

That same day, Ericsson's head of networks, Fredrik Jejdling, confirmed the company's support of Open RAN, announced earlier this year. You might have read my perspective on Ericsson's stance on Open RAN; the presentation did not change my opinion, but it is good progress for the industry that the RAN market leader now officially supports the technology, albeit with some caveats.

Nokia, for its part, announced a 5G Open RAN pilot with Vodafone in Italy, and another pilot successfully completed in Romania on a cluster of Open RAN sites shared by Orange and Vodafone (Multi-Operator Core Network, MOCN).

While TIP is a traditional conduit for the big five European operators to enact their Open RAN strategy, this year's event was dominated by Vodafone, with a somewhat subdued presence from Deutsche Telekom, Telefonica, Orange and TIM. Rakuten Symphony was notable by its absence, as was Samsung.

The subsequent days saw less prominent announcements, but good representation and panel participation from Open RAN supporters and vendors. Mavenir and Juniper Networks, in particular, were fairly vocal about late Open RAN joiners who do not really seem to embrace multivendor competition and an open API / interface approach.


I was fortunate to be on a few panels, notably on the main stage to discuss RAN intelligence progress, particularly around the emergence of RICs and Apps as orchestration and automation engines for the RAN.

I also presented the findings of my report on the topic (presentation below) and moderated a panel on overcoming automation challenges in telecom networks with CI/CD/CT.


Wednesday, October 18, 2023

Generative AI and Intellectual Property

Since the launch of ChatGPT, Generative Artificial Intelligence and Large Language Models have gained extraordinary popularity and agency in a very short amount of time. As we all play around with the most approachable use cases to generate text, images and videos, governments, global organizations and companies are busy developing the technology, racing to harness the early mover's advantage this disruption will bring to all areas of our society.

I am not a specialist in the field and my musings might be erroneous here, but it feels that the term Gen AI might be a little misleading, since a lot of the technology relies on vast datasets that are used to assemble composite final products. Essentially, the creation aspect is more an assembly than pure creation. One could object that every music sheet is just an assembly of notes and that creation is still there, even as the author is influenced by their taste and exposure to other authors... Fair enough, but in the case of document / text creation, it feels that the use of public information to synthesize a document is not necessarily novel.

In any case, I am an information worker, most times a labourer, sometimes an artisan, but in any case I live off my intellectual property. I choose to make some of that intellectual property available license-free here on this blog, while a larger part is sold in the form of reports, workshops, consulting work, etc. This work might or might not be license-free, but it is always copyrighted, meaning that I hold the rights to the content and allow its distribution under specific covenants.

It strikes me that, as I see crawlers go through my blog and index the content I make publicly available, this crawling serves two purposes at odds with each other. The first allows my content to be discovered and to reach a larger audience, which benefits me in terms of notoriety and increased business. The second, more insidious, not only indexes but mines my content for aggregation into LLMs, so that it can be regurgitated and reassembled by an AI. It could be extraordinarily difficult to apportion an AI's rendition of an aggregated document to its sources, but it feels unfair that copyrighted content goes unattributed.
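For what it's worth, the main lever currently available to publishers is a crawler opt-out in robots.txt. A minimal sketch below, using the AI-training user agents that were publicly documented at the time of writing (OpenAI's GPTBot, Google-Extended, Common Crawl's CCBot); note that honoring these directives is entirely voluntary on the crawler's side, and the list of agents keeps changing:

```
# Hypothetical robots.txt for a blog like this one: opt out of known
# AI-training crawlers while leaving ordinary search indexing (the
# discovery benefit) untouched. Compliance is voluntary.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
```

This preserves the first purpose (discovery) while refusing the second (mining), at least for crawlers that identify themselves and play by the rules.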

I have been playing with the idea of using LLMs to create content. Anyone can do that with prompts and some license-free software, but I am fascinated with the idea of an AI assistant that would be able to write like me, using my semantics and quirks, and that I could train through reinforcement learning from human feedback. Again, this poses some issues. To be effective, this AI would have to have access to my dataset, the collection of intellectual property I have created over the years. This content is protected and is my livelihood, so I cannot share it with a third party without strict conditions. That rules out free software that can reuse whatever content you give it to ingest.
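As a thought experiment, the first (supervised) step of such a project could stay entirely on a local machine, which is the whole point. A minimal sketch below, assuming a small open-weights model ("gpt2" is just a stand-in) and the Hugging Face transformers / datasets libraries; the corpus, paths and hyperparameters are illustrative assumptions, and the RLHF preference-tuning loop would come later, on top of this step:

```python
# Sketch: supervised fine-tuning of a local open-weights model on my own
# posts, so the corpus never leaves my machine. Illustrative, not a recipe.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for any locally runnable model
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# In reality: load every blog post and report from local disk.
posts = ["Open RAN separates the radio unit from the baseband software...",
         "Net neutrality should be the opposite of non-intervention..."]
ds = Dataset.from_dict({"text": posts})

def tokenize(batch):
    enc = tok(batch["text"], truncation=True, padding="max_length",
              max_length=256)
    enc["labels"] = enc["input_ids"].copy()  # next-token prediction targets
    # (a production setup would mask pad positions with -100 in the labels)
    return enc

ds = ds.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-style-model",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=ds,
)
trainer.train()  # weights and corpus stay local; nothing is uploaded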

With licensed software, I am still not sure the right mechanisms are in place for copyright and content protection and control, so that I can ensure that the content I feed to the LLM remains protected and accessible only to me, while the LLM can ingest other license-free or public-domain content to enrich the dataset.

Are other information workers worried that LLM/AI reuses their content without attribution? Is it time to have a conversation about Gen AI, digital rights management and copyright?

***This blog post was created organically without assistance from Gen AI, except for the picture, created with Canva.com

Tuesday, October 3, 2023

Should regulators forfeit spectrum auctions if they can't resolve Net Neutrality / Fair Share?

I have been writing about Net Neutrality and Fair Share broadband usage for nearly 10 years. Both sides of the argument have merit, and it is difficult to find a balanced view represented in the media these days. Absolutists would lead you to believe that internet usage should be unregulated, with everyone able to stream, download and post anything anywhere, without respect for intellectual property or fair usage; on the other side of the fence, service provider dogmatists would like to control, apportion, prioritize and charge based on their interests.

Of course, the reality is a little more nuanced. A better understanding of the nature and evolution of traffic, as well as of the cost structure of networks, helps one appreciate the respective parties' stances and offers a better view of what could be done to reduce the chasm.

  1. From a cost structure perspective first, our networks grow and accommodate demand differently depending on whether we are looking at fixed line / cable / fibre broadband or mobile.
    1. In the first case, capacity growth is a function of technology and civil works.
      1. On the technology front, the evolution from dial-up / PSTN to copper and then fiber dramatically increases network capacity and has followed ~20-year cycles. The investments are enormous and require the deployment and management of central offices and their evolution to edge compute data centers. These investments happen in waves within a relatively short time frame (~5 years). Once the network is in operation, the return on investment is a function of the number of users and the utilisation rate of the asset, which in this case means filling the network with traffic.
      2. On the civil works front, throughout the technology evolution, continuous work is ongoing to lay transport fiber along new housing developments, while replacing antiquated and aging copper or cable connectivity. This is a continuous burn and its run rate is a function of the operator's financial capacity.
    2. In mobile networks, you can find similar categories but with a much different balance and impact on ROI.
      1. From a technology standpoint, the evolution from 1G to 5G has taken roughly 10 years per cycle. A large part of the investment for each generation is a spectrum license acquired from the regulator / government. In addition to this, most network elements, from the access to the core and OSS / BSS, need to be changed. The transport part relies in large part on the fixed network above. Until 5G, most of these elements were proprietary servers and software, which meant a generational change induced a complete forklift upgrade of the infrastructure. With 5G, the separation of software and hardware, the extensive use of COTS hardware and the implementation of a cloud-based separation of user and control planes should mean that the next generational upgrade will be less expensive, with only the software and part of the hardware necessitating a complete refresh.
      2. The civil works for mobile networks are comparable to the fixed network's for new coverage, but follow the same cycles as the technology timeframe with respect to the upgrades and changes necessary to the radio access. Unlike the fixed network, though, there is an obligation of backwards compatibility, with many networks still running 2G, 3G and 4G while deploying 5G. The real estate being essentially antennas and cell sites, this becomes a very competitive environment with limited capacity for growth in space, pushing service providers to share assets (antennas, spectrum, radios...) and to deploy, whenever possible, multi-technology radios.
The conclusion here is that fixed networks have long investment cycles and ROI and low margins, relying on the number of connections and traffic growth, while mobile networks have shorter investment cycles, with bursty margin growth and contraction with each new generation.

What does this have to do with Net Neutrality / Fair Share? I am coming to it, but first we need to examine the evolution of traffic and prices to understand where the issue resides.

Now, in the past, we had to pay for every single minute, text or kilobyte received or sent. Network operators were making money off traffic growth and were pushing users and content providers to fill their networks. Video somewhat changed that. A user watching a 30-second video doesn't really perceive whether the video is at 720p, 1080p or 4K, 30 or 60 fps. It is essentially the same experience. That same video, though, can vary in size by 20x depending on its resolution. To compound the issue, operators foolishly transitioned to all-you-can-eat data plans with 4G to acquire new consumers, a self-inflicted wound that has essentially killed their 5G business case.
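To put rough numbers on that 20x claim, here is a back-of-envelope calculation; the bitrates below are assumed ballpark figures for common streaming profiles, not measurements from any particular service:

```python
# Back-of-envelope: size of the same 30-second clip at typical streaming
# bitrates. The bitrate values are rough assumptions for illustration.
BITRATES_MBPS = {"720p30": 2.5, "1080p30": 5.0, "4K60": 45.0}

DURATION_S = 30
for profile, mbps in BITRATES_MBPS.items():
    size_mb = mbps * DURATION_S / 8  # megabits per second -> total megabytes
    print(f"{profile}: ~{size_mb:.0f} MB")

# 720p30: ~9 MB, 1080p30: ~19 MB, 4K60: ~169 MB -- an ~18x spread between
# the lowest and highest profiles, for a clip the viewer barely perceives
# differently on a phone screen.
```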

I have written at length about the erroneous assumptions underlying some of the discourse of net neutrality advocates.

In order to understand net neutrality and traffic management, one has to understand the different perspectives involved.
  • Network operators compete against each other on price, coverage and, more importantly, network quality. In many cases, they have identified that improving or maintaining Quality of Experience is the single most important success factor for acquiring and retaining customers. We have seen it time and again with voice services (call drops, voice quality…), messaging (texting capacity, reliability…) and data services (video start, stalls, page loading time…). These KPIs are at the heart of the operator's business. As a result, operators tend to either improve or control user experience by deploying an array of traffic management functions.
  • Content providers assume that the highest quality of content (8K UHD for video, for instance) equals maximum experience for the subscriber, and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible.
The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

This behavior leads to a network where resources can be in contention and all endpoints vie for priority and maximum resource allocation. From this perspective, one can understand that there is no such thing as "net neutrality", at least not in wireless networks.

When network resources are oversubscribed, decisions are made as to who gets more capacity, priority, speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When resources are in contention, traffic is managed based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface-level thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.
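To make the point concrete: whenever demand exceeds capacity, some allocation rule always applies, whether anyone chose it or not. The sketch below contrasts the implicit rules buried in equipment with one explicit, well-known alternative, max-min fairness; the flow names and demands are made-up illustrations:

```python
# A toy contrast to the "unmanaged" status quo: an explicit allocation rule
# (max-min fairness) applied when demand exceeds capacity.
# Flow names and demands (in Mbps) are invented for illustration.
def max_min_fair(capacity, demands):
    """Water-filling max-min fair allocation: repeatedly grant each
    still-unsatisfied flow an equal share of the remaining capacity."""
    alloc = {flow: 0.0 for flow in demands}
    unsatisfied = set(demands)
    remaining = capacity
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)
        for flow in list(unsatisfied):
            grant = min(share, demands[flow] - alloc[flow])
            alloc[flow] += grant
            remaining -= grant
            if alloc[flow] >= demands[flow] - 1e-9:
                unsatisfied.discard(flow)
    return alloc

demands = {"4K stream": 45, "video call": 4, "web browsing": 2}
print(max_min_fair(25, demands))
# -> the call and browsing get their full demand (4 and 2 Mbps); the 4K
#    stream absorbs the remaining 19. An implicit rule in a congested
#    buffer could instead let the 4K stream starve the other two.
```

The point is not that max-min fairness is the right rule, but that the choice of rule is a policy decision, and today it is made by default, deep in the equipment.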

As we contemplate 6G, with its hints of metaverse, augmented / mixed reality and hyperconnectivity, the cost structure of network infrastructure hasn't yet been sufficiently decoupled from traffic growth; as we have seen, video is elastic, and XR will be a heavy burden on the networks. Network operators have so far essentially failed to offer attractive digital services that would monetize their network investments. Video and digital service providers already pay for their on-premises and cloud infrastructure as well as transport; there is little chance they would finance telco operators' capacity growth.

Where does this leave us? It might be time for regulators / governments to either take an active and balanced role in Net Neutrality and Fair Share, to ensure that both sides can find a sustainable business model, or to forfeit spectrum auctions for the next generations.

Monday, October 2, 2023

DOCOMO's 30% TCO Open RAN savings

DOCOMO announced last week, during Mobile World Congress Las Vegas, the availability of its OREX offering for network operators. OREX, which stands for Open RAN Experience, was initially introduced by the Japanese operator in 2021 as OREC (Open RAN Ecosystem).

The benefits claimed by DOCOMO are quite extraordinary, as they expect to "reduce clients’ total cost of ownership by up to 30% when the costs of initial setup and ongoing maintenance are taken into account. It can also reduce the time required for network design by up to 50%. Additionally, OREX reduces power consumption at base stations by up to 50%".

The latest announcement clarifies DOCOMO's market proposition and differentiation. Since the initial communications on OREX, DOCOMO had been presenting to the market a showcase of validated Open RAN blueprint deployments that the operator had carried out in its lab. What was unclear was the role DOCOMO wanted to play: was the operator just offering best practices and exemplar implementations, or was it angling for a different play? This announcement settles the question of DOCOMO's ambitions.

On paper, the operator showed an impressive array of vendors collaborating to provide multi-vendor Open RAN deployments, with choices and some possible permutations between each element of the stack.


At the server layer, OREX provides options from Dell, HP and Fujitsu, all on x86 platforms, with various accelerators (ASICs, FPGAs...) from Intel (FlexRAN), Qualcomm, AMD and NVIDIA. While the COTS servers are readily interchangeable, the accelerator layer binds the Open RAN software vendor and is not easily swappable.

At the virtualization O-Cloud layer, DOCOMO has integrated VMware, Red Hat and Wind River, which represent the current best of breed in that space.

The base station software (CU / DU) layer has seen implementations from Mavenir, NTT Data and Fujitsu.
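To illustrate the size of the nominal choice space these layers imply (and why the accelerator caveat above matters), a quick enumeration; the vendor lists are those announced, but the count is purely nominal and does not reflect DOCOMO's actual validated combinations:

```python
# Illustrative only: the OREX layers above as a small data structure, to
# show the nominal size of the choice space. Vendor names are those listed
# in the announcement; nothing here is DOCOMO's compatibility matrix.
from itertools import product

stack = {
    "server":      ["Dell", "HP", "Fujitsu"],
    "accelerator": ["Intel FlexRAN", "Qualcomm", "AMD", "NVIDIA"],
    "o_cloud":     ["VMware", "Red Hat", "Wind River"],
    "cu_du":       ["Mavenir", "NTT Data", "Fujitsu"],
}

combos = list(product(*stack.values()))
print(f"{len(combos)} nominal stack permutations")  # 3 * 4 * 3 * 3 = 108

# In practice the matrix collapses: each CU/DU software build is tied to a
# specific accelerator (and each O-RU pairing must be validated
# separately), so only a fraction of these combinations are deployable.
```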

What is missing from this picture, and a little misleading, is the list of Open Radio Unit vendors that have participated in these setups, since this is where network operators need the most permutability. As of today, most Open RAN multi-vendor deployments will see a separate vendor in the O-RU and CU / DU space. This is because no single vendor today can satisfy the variety of O-RUs necessary to meet all the spectrum bands and form factors a brownfield operator needs. More details about this in my previous state of Open RAN post here.

In this iteration, DOCOMO has clarified the O-RU vendors it has worked with most recently (Dengyo Technology, DKK Co, Fujitsu, HFR, Mavenir, and Solid). As always, the devil is in the details, and unfortunately DOCOMO falls short of providing a more complete view of the types of O-RU (mMIMO or small cell?) and the combinations of O-RU vendor, CU / DU vendor, accelerator vendor and band, which is ultimately the true measure of how open this proposition is.

What DOCOMO clarifies most in this latest iteration is its contribution and the role it expects to play in the market.

First, DOCOMO introduces its Open RAN-compliant Service Management and Orchestration (SMO). This offering is a combination of NTT DOCOMO developments and third-party contributions (details can be found in my report and workshop Open RAN RICs and Apps 2023). The SMO is DOCOMO's secret sauce when it comes to the claimed savings, which result mainly from the automation of design, deployment and maintenance of the Open RAN systems, as well as RU energy optimization.


Lastly, DOCOMO presents its vast integration experience and is now offering systems integration, support and maintenance services. The operator seeks the role of specialized SI and prime contractor for these O-RAN projects.

While DOCOMO's experience is impressive and has led many generations of network innovation, this latest move, transitioning from leading operator and industry pioneer to O-RAN SI and vendor, is reminiscent of other Japanese companies such as Rakuten with its Symphony offering. Japanese operators and vendors see the contraction of their domestic market as a strategic threat to their core business and are trying to replicate their success overseas. While this has been quite successful in greenfield environments, the hypothesis that brownfield operators (particularly tier 1) will buy technology and services from another carrier (even one not competing geographically) still needs to be validated.