Monday, December 4, 2023

Is this the Open RAN tipping point: AT&T, Ericsson, Fujitsu, Nokia, Mavenir


The latest publications around Open RAN deliver a mixed bag of progress and skepticism. How should we interpret this conflicting information?

A short retrospective of the most recent news:

On the surface, Open RAN seems to be benefiting from strong momentum and delivering on its promise of disrupting traditional RAN by introducing new suppliers and opening the traditional architecture to a more disaggregated, multi-vendor model. The latest announcement from AT&T and Ericsson would even suggest that the promise of reduced TCO for brownfield deployments is achievable:
AT&T's yearly CAPEX guidance is expected to fall from a high of ~$24B to about $20B per year starting in 2024. If the $14B spent over 5 years on Ericsson RAN yields the announced 70% of traffic on Open RAN infrastructure, AT&T might have dramatically improved its RAN CAPEX with this deal.
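A rough back-of-the-envelope calculation on the figures above illustrates the scale involved. This is a sketch only: the $24B / $20B guidance covers all of AT&T's CAPEX, not just RAN, so attributing the reduction to this deal is an assumption, not a claim from the announcements.

```python
# Back-of-the-envelope on AT&T's announced figures (all amounts in $B).
# Assumption: the CAPEX guidance change is at least partly attributable
# to the RAN deal; the public figures do not break this out.

capex_before = 24.0   # ~yearly CAPEX high
capex_after = 20.0    # guided yearly CAPEX from 2024
deal_total = 14.0     # announced Ericsson deal value
deal_years = 5
open_ran_traffic_share = 0.70  # announced target share of traffic

annual_deal_spend = deal_total / deal_years          # yearly RAN deal spend
annual_capex_reduction = capex_before - capex_after  # implied yearly reduction

print(f"Annual RAN deal spend: ${annual_deal_spend:.1f}B")
print(f"Annual CAPEX reduction implied by guidance: ${annual_capex_reduction:.1f}B")
```

Even under these loose assumptions, the implied yearly reduction exceeds the annualized deal spend, which is why the announcement reads as a TCO proof point.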

What is driving these announcements?

For network operators, Open RAN has been about strategic supply chain diversification. The coalescence of the market into an oligopoly, and a duopoly after the exclusion of Chinese vendors from a large number of Western networks, has created an unfavorable negotiating position for the carriers. The business case of 5G relies heavily on declining costs, or rather a change in the cost structure of deploying and operating networks. Open RAN is an element of it, together with edge computing and telco clouds.

For operators

The decision to move to Open RAN is, for most operators, no longer up for debate. While the large majority of brownfield networks will not completely transition to Open RAN, they will introduce the technology alongside the traditional architecture to foster cloud-native network implementations. It is not a matter of if but a matter of when.
When varies for each market and operator. Operators do not roll out a new technology just because it makes sense, even if the business case is favorable. A window of opportunity has to present itself to facilitate the introduction of the new technology. In the case of Open RAN, these windows can be:
  • Generational changes: 4G to 5G, NSA to SA, 5G to 6G
  • Network obsolescence: the RAN contracts are up for renewal, the infrastructure is aging or needs a refresh. 
  • New services: private networks, network slicing...
  • Internal strategy: transition to cloud native, personnel training, operating models refresh
  • Vendor weakness: nothing better than an end-of-quarter / end-of-year big infrastructure bundle discount to secure and alleviate the risks of introducing new technologies

For traditional vendors

For traditional vendors, the innovator's dilemma has been at play. Nokia endorsed Open RAN early on, with little to show for it until recently, when it convincingly demonstrated multi-vendor integration and live trials. Ericsson, as market leader, has been slower to endorse Open RAN and has so far adopted it selectively, for understandable reasons.

For emerging vendors

Emerging vendors have had mixed fortunes with Open RAN. The early market leader, Altiostar, was absorbed by Rakuten, which gave the market pause for ~3 years while other vendors caught up. Mavenir, Samsung, Fujitsu and others now offer credible products and services, with possible multi-vendor permutations.
Disruptors, emerging and traditional vendors are all battling in the RAN intelligence and orchestration market segment, which promises to deliver additional Open RAN benefits (see link).


Open RAN still has many challenges to overcome before it becomes a solution that can be adopted in any network, but the latest momentum seems to show progress toward implementing the technology at scale.
More details can be found through my workshops and advisory services.



Thursday, November 23, 2023

Announcing Private Networks 2024


Telecoms cellular networks, delivered by network operators, have traditionally been designed to provide coverage and best effort performance for consumers' general use. This design prioritizes high population density areas, emphasizing cost-effective delivery of coverage solutions with a network architecture treating all connections uniformly, effectively sharing available bandwidth. In some markets, net neutrality provisions further restrict the prioritization of devices, applications, or services over others.

Enterprises, governments, and organizations often turn to private networks due to two primary reasons. First, there may be no commercial network coverage in their operational areas. Second, even when commercial networks are present, they may fail to meet the performance requirements of these entities. Private networks offer a tailored solution, allowing organizations to have dedicated, secure, and high-performance connectivity, overcoming limitations posed by commercial networks.

Enterprises, industries, and government IT departments have developed a deep understanding of their unique connectivity requirements over the years. Recognizing the critical role that connectivity plays in their operations, these entities have sought solutions that align closely with their specific needs. Before the advent of 5G technology, Wi-Fi emerged as a rudimentary form of private networks, offering a more localized and controlled connectivity option compared to traditional cellular networks. However, there were certain limitations and challenges associated with Wi-Fi, and the costs of establishing and operating fully-fledged private networks were often prohibitive.

Enterprises, industries, and government organizations operate in diverse and complex environments, each with its own set of challenges and requirements. These entities understand that a one-size-fits-all approach to connectivity is often inadequate. Different sectors demand varied levels of performance, security, and reliability to support their specific applications and processes. This understanding has driven the search for connectivity solutions that can be tailored to meet the exacting standards of these organizations.

Wi-Fi technology emerged as an early solution that provided a degree of autonomy and control over connectivity. Enterprises and organizations adopted Wi-Fi to create local networks within their premises, enabling wireless connectivity for devices and facilitating communication within a confined area. Wi-Fi allowed for the segmentation of networks, offering a level of privacy and control that was not as pronounced in traditional cellular networks.

However, Wi-Fi also came with its limitations. Coverage areas were confined, and the performance could be affected by interference and congestion, especially in densely populated areas. Moreover, the security protocols of Wi-Fi, while evolving, were not initially designed to meet the stringent requirements of certain industries, such as finance, healthcare, or defense.

Establishing and operating private networks before the advent of 5G technology posed significant financial challenges. The infrastructure required for a dedicated private network, including base stations, networking equipment, and spectrum allocation, incurred substantial upfront costs. Maintenance and operational expenses added to the financial burden, making it cost-prohibitive for many enterprises and organizations to invest in private network infrastructure.

Moreover, the complexity of managing and maintaining a private network, along with the need for specialized expertise, further elevated the costs. These challenges made it difficult for organizations to justify the investment in a private network, especially when commercial networks, despite their limitations, were more readily available and appeared more economically feasible.

The arrival of 5G technology has acted as a game-changer in the landscape of private networks. 5G offers the potential for enhanced performance, ultra-low latency, and significantly increased capacity. These capabilities address many of the limitations that were associated with Wi-Fi and earlier generations of cellular networks. The promise of 5G has prompted enterprises, industries, and government entities to reassess the feasibility of private networks, considering the potential benefits in terms of performance, security, and customization.

The growing trend of private networks can be attributed to several key factors:

  • Performance Customization: Private networks enable enterprises and organizations to customize their network performance according to specific needs. Unlike commercial networks that provide best effort performance for a diverse consumer base, private networks allow for tailored configurations that meet the unique demands of various industries.
  • Security and Reliability: Security is paramount for many enterprises and government entities. Private networks offer a higher level of security compared to public networks, reducing the risk of cyber threats and unauthorized access. Additionally, the reliability of private networks ensures uninterrupted operations critical for sectors like finance, healthcare, and defense.
  • Critical IoT and Industry 4.0 Requirements: The increasing adoption of Industrial IoT (IIoT) and Industry 4.0 technologies necessitates reliable and low-latency connectivity. Private networks provide the infrastructure required for seamless integration of IoT devices, automation, and real-time data analytics crucial for modern industrial processes.
  • Capacity and Bandwidth Management: In sectors with high data demands, such as smart manufacturing, logistics, and utilities, private networks offer superior capacity and bandwidth management. This ensures that enterprises can handle large volumes of data efficiently, supporting data-intensive applications without compromising on performance.
  • Flexibility in Deployment: Private networks offer flexibility in deployment, allowing organizations to establish networks in remote or challenging environments where commercial networks may not be feasible. This flexibility is particularly valuable for industries such as mining, agriculture, and construction.
  • Compliance and Control: Enterprises often operate in regulated environments, and private networks provide greater control over compliance with industry-specific regulations. Organizations can implement and enforce their own policies regarding data privacy, network access, and usage.
  • Edge Computing Integration: With the rise of edge computing, private networks seamlessly integrate with distributed computing resources, reducing latency and enhancing the performance of applications that require real-time processing. This is particularly advantageous for sectors like healthcare, where quick data analysis is critical for patient care.

As a result of these factors, the adoption of private networks is rapidly becoming a prominent industry trend. Organizations across various sectors recognize the value of tailored, secure, and high-performance connectivity that private networks offer, leading to an increasing shift away from traditional reliance on commercial cellular networks. This trend is expected to continue as technology advances and industries increasingly prioritize efficiency, security, and customized network solutions tailored to their specific operational requirements.

With the transformative potential of 5G, these entities are now reevaluating the role of private networks, anticipating that the advancements in technology will make these networks more accessible, cost-effective, and aligned with their specific operational requirements.

Terms and conditions available on demand: patrick.lopez@coreanalysis.ca  

Monday, November 13, 2023

RAN Intelligence leaders 2023


RAN intelligence is an emerging market segment composed of RAN Intelligent Controllers (RICs) and their associated Apps. I have been researching this field for the last two years and, after an exhaustive analysis of the vendors' and operators' offerings and strategies, I am glad to publish here an extract of my findings. A complete review of the findings and rankings can be found through the associated report or workshop (commercial products).

The companies that participated in this study are AccelleRAN, AIRA, Airhop, Airspan, Cap Gemini, Cohere Technologies, Ericsson, Fujitsu, I-S Wireless, Juniper, Mavenir, Nokia, Northeastern, NTT Docomo, Parallel Wireless, Radisys, Rakuten Symphony, Rimedo Labs, Samsung, Viavi, VMware.

They were separated into two overall categories:

  • Generalists: companies offering both RIC(s) and Apps implementations
  • Specialists: companies offering only Apps

The Generalist ranking is:



#1 Mavenir
#2 ex aequo: Juniper and VMware
#4 Cap Gemini



The Specialists ranking is:



#1 Airhop
#2 Rimedo Labs
#3 Cohere Technologies



The study features a review of a variety of established and emerging vendors in the RAN space. RAN intelligence is composed of:

  • Non Real Time RIC - a platform for RAN intelligence requiring more than 1 second to process and create feedback loops to the underlying infrastructure. This platform is an evolution of SON (Self Organizing Networks) systems, RAN EMS (Element Management Systems) and OSS (Operations Support Systems). The Non RT RIC is part of the larger SMO (Service Management and Orchestration) framework.
  • rApps -  Applications built on top of the Non RT RIC platform.
  • Near Real Time RIC - a platform for RAN intelligence requiring less than 1 second to process and create feedback loops to the underlying infrastructure. This platform is a collection of capabilities today embedded within the RUs (Radio Units), DUs (Distributed Units) and CUs (Centralized Units).
  • xApps - Applications built on top of the Near RT RIC platform.
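The split between the two platforms is essentially a latency budget. As an illustration only (the function, thresholds and labels below are my simplification, not taken from any O-RAN specification), a dispatcher might route optimization tasks by the feedback-loop latency they require:

```python
# Illustrative only: routing RAN optimization tasks by feedback-loop latency.
# The commonly cited split puts loops slower than ~1 s in the Non-RT RIC
# (rApps) and loops between ~10 ms and 1 s in the Near-RT RIC (xApps);
# anything tighter stays embedded in the RAN nodes themselves.

def route_task(loop_latency_s: float) -> str:
    """Return which platform would host a control loop of the given latency."""
    if loop_latency_s >= 1.0:
        return "Non-RT RIC (rApp)"    # policy, SON-like optimization
    elif loop_latency_s >= 0.010:
        return "Near-RT RIC (xApp)"   # e.g. traffic steering, QoS control
    else:
        return "RAN node (DU/CU/RU)"  # real-time scheduling stays embedded

print(route_task(60.0))    # energy-saving policy
print(route_task(0.1))     # traffic steering
print(route_task(0.001))   # MAC scheduling
```

The point of the sketch is that rApps and xApps are not interchangeable: the same optimization goal lands on a different platform depending on how fast the loop must close.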
The vendors and operators were ranked on their strategy, vision and implementation across six dimensions, based on primary research from interviews, publicly available information, Plugfest participation and deployment observations:
  • Platform - the ability to create a platform and a collection of processes facilitating the developers' capability to create Apps that can be ported from one vendor to the other with minimum adaptation. Considerations were given to Apps lifecycle management, maturity of APIs / SDK, capability to create enabling apps / processes for hosted Apps.
  • Integrations / partnerships - one of the key tenets of Open RAN is multi-vendor, vendor-agnostic implementation. From this perspective, companies that have demonstrated their integration capabilities in multi-vendor environments or the hosting of third-party applications were ranked higher.
  • Non Real Time RIC - ranking the vision, implementation and maturity of the Non RT RIC capabilities.
  • Near Real Time RIC - ranking the vision, implementation and maturity of the Near RT RIC capabilities.
  • rApps - ranking the vision, implementation and maturity of the rApps offering
  • xApps - ranking the vision, implementation and maturity of the xApps offering

Tuesday, November 7, 2023

What's behind the operators' push for network APIs?

 


As I saw the latest announcements from GSMA, Telefonica and Deutsche Telekom, as well as Ericsson's asset impairment on the Vonage acquisition, I was reminded of the call I was making three years ago for the creation of operator platforms.

On one hand, 21 large operators (namely America Movil, AT&T, Axiata, Bharti Airtel, China Mobile, Deutsche Telekom, e& Group, KDDI, KT, Liberty Global, MTN, Orange, Singtel, Swisscom, STC, Telefónica, Telenor, Telstra, Telecom Italia (TIM), Verizon and Vodafone) within the GSMA launched an initiative to open their networks to developers with the launch of 8 "universal" APIs (SIM Swap, Quality on Demand, Device Status, Number Verification, Simple Edge Discovery, One Time Password SMS, Carrier Billing – Check Out and Device Location).

Additionally, Deutsche Telekom was the first to pull the trigger on the launch of its own gateway, "MagentaBusiness API", based on Ericsson's impaired asset. The 3 APIs launched are Quality-on-Demand, Device Status – Roaming and Device Location, with more to come.

Telefonica, on their side, launched shortly after DT their own Open Gateway offering with 9 APIs (Carrier Billing, Know Your Customer, Number Verification, SIM Swap, QoD, Device Status, Device Location, QoD Wi-Fi and Blockchain Public Address).
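To make concrete what consuming one of these APIs looks like, here is a sketch patterned after the Quality-on-Demand API: an application asks the operator to apply a QoS profile to a device session for a period of time. The endpoint URL, token, profile name and exact payload shape below are simplified placeholders, not any specific operator's implementation.

```python
import json

# Hypothetical request body for a CAMARA-style Quality-on-Demand session.
# Field names follow the general pattern of the GSMA Open Gateway QoD API,
# simplified for illustration; real deployments may differ.

def build_qod_session(device_ip: str, app_server_ip: str,
                      qos_profile: str, duration_s: int) -> dict:
    """Assemble the JSON body to reserve a QoS session for a device."""
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": qos_profile,   # e.g. a low-latency profile name
        "duration": duration_s,      # seconds the session should last
    }

body = build_qod_session("203.0.113.10", "198.51.100.5", "QOS_E", 3600)
print(json.dumps(body, indent=2))

# The application would then POST this to the operator's gateway, e.g.:
# requests.post("https://api.example-telco.com/qod/v0/sessions",
#               json=body, headers={"Authorization": "Bearer <token>"})
```

The developer never touches the network itself: the API expresses intent (this device, this server, this quality, this long) and the operator decides how to fulfil it.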

On the other hand, Ericsson wrote off 50% of the Vonage acquisition, while "creating a new market for exposing 5G capabilities through network APIs".

Dissonance much? Why are operators launching network APIs with fanfare while one of the earliest and largest vendors in the field reports an asset impairment, even as it claims a large market opportunity?

The move for telcos toward exposing network APIs is not new and has seen a few unsuccessful, aborted tries (GSMA OneAPI in 2013, DT's MobiledgeX launch in 2019). The premises have varied over time, but the central tenet remains the same. Although operators have great experience in rolling out and operating networks, they have essentially been providing the same connectivity services to all consumers, enterprises and governmental organizations without much variation. The growth in cloud networks is underpinned by new generations of digital services, ranging from social media and video streaming for consumers to cloud storage, computing, CPaaS and IT function cloud migration for enterprises. Telcos have been mostly observers in this transition, with some timid attempts to participate, but by and large they have been quite unsuccessful in creating and rolling out innovative digital services. As edge computing and the Open RAN RIC become possibly the first applications forcing telcos to consider hyperscaler tie-ins with cloud providers, several strategic questions arise.

Telcos have been using cloud fabric and porting their vertical, proprietary systems to cloud-native environments for their own benefit. As this transition progresses, there is a realization that private network growth is a reflection of enterprises' desire to create and manage their connectivity products themselves. While operators have been architecting and planning their networks for network slicing, hoping to sell managed connectivity services to enterprises, the latter have been effectively managing their connectivity themselves, in the cloud and in private networks, without the telcos' assistance. This realization leads to an important decision: if enterprises want to manage their connectivity themselves and expand that control to 5G / cellular, should telcos let them and, if so, by what means?

The answer is in network APIs. Without giving third parties access to the network itself, the best solution is to offer a set of controlled, limited tools that allow them to discover, reserve and consume network resources while the operator retains overall control of the network. There are a few conditions for this to work.

The first is essentially the necessity for universal access. Enterprises and developers have gone through the learning curve of using AWS, Google Cloud and Azure tools, APIs and semantics. They can conceivably see value in learning a new set of telco APIs, but won't likely go through the effort if each telco has a different set in each country.

The second, and historically the hardest for telcos, is to create and manage an ecosystem and developer community. They have tried many times and in different settings, but in most cases have failed, only enlisting friendly developers in the form of their suppliers and would-be suppliers, who dedicate their efforts to furthering their own commercial opportunities. The jury is still out as to whether this latest foray will succeed in attracting independent developers.

The third, and possibly the riskiest part of this equation, is that it is unclear which APIs will prove useful; the premise that enterprises and developers will actually want to use them remains untested. Operators are betting that they can essentially create a telco cloud experience for developers more than 15 years after AWS launched, with fewer tools, less capacity to innovate, fewer cloud-native skills and a pretty bad record in nurturing developers and enterprises.

Ericsson's impairment of Vonage probably acknowledges that the central premise that telco APIs are desirable is unproven; that if the model succeeds, operators will want to retain control; and that there is less value in the platform than in the APIs themselves (the GSMA launch on an open source platform essentially depreciates the Vonage acquisition directly).

Another path exists, which provides less control (and commercial upside) for telcos: hosting third-party cloud functions in their networks, even allowing third-party cloud infrastructure (such as AWS Outposts, for instance) to be colocated in their data centers. This option comes with the benefit of an existing ecosystem, toolset, services and clients, simply extending the cloud into the telco network. The major drawback is that the telco accepts its role as a utility provider of connectivity with little participation in the service value creation.

Both scenarios are being played out right now and both paths represent much uncertainty and risks for operators that do not want to recognize the strategic implications of their capabilities.


Friday, November 3, 2023

Telco edge compute, RAN and AI


In recent years, the telecommunications industry has witnessed a profound transformation, driven by the rapid penetration of cloud technologies. Cloud Native Functions (CNFs) have become common in the packet core, OSS / BSS and transport, and are making their way into the access domain, both fixed and mobile. With CNFs, virtual infrastructure management and data centers have become an important part of network CAPEX strategies.

While edge computing in telecoms, with the emergence of MEC (Multi Access Edge Computing), has been mostly confined to telco network functions (UPF, RAN CU/DU...), network operators should now explore the opportunities for retail and wholesale edge computing services. My workshop examines in detail the strategies, technologies and challenges associated with this opportunity.

Traditional centralized cloud infrastructure is being augmented with edge computing, effectively bringing computation and data storage closer to the point of data generation and consumption.

What are the benefits of edge computing for telecom networks?

  • Low Latency: One of the key advantages of edge computing is its ability to minimize latency. This is of paramount importance in telecoms, especially in applications like autonomous vehicles, autonomous robots / manufacturing, and remote-controlled machinery.
  • Bandwidth Efficiency: Edge computing reduces the need for transmitting massive volumes of data over long distances, which can strain network bandwidth. Instead, data processing and storage take place at the edge, significantly reducing the burden on core networks. This is particularly relevant for machine vision, video processing and AI use cases.
  • Enhanced Security: Edge computing offers improved security by allowing sensitive data to be processed locally. This minimizes the exposure of critical information to potential threats in the cloud. Additionally, privacy, data sovereignty and residency concerns can be efficiently addressed by local storage / computing.
  • Scalability: Edge computing enables telecom operators to scale resources as needed, making it easier to manage fluctuating workloads effectively.
  • Simpler, cheaper devices: Edge computing allows devices to be cheaper and simpler while retaining sophisticated functionalities, as storage, processing can be offloaded to a nearby edge compute facility.
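The latency argument above can be made concrete with a simple propagation estimate. The numbers are illustrative lower bounds: real latency budgets also include radio access, queuing and processing delays on top of fiber propagation.

```python
# Rough round-trip propagation delay: edge site vs. distant cloud region.
# Light in fiber travels at roughly 200,000 km/s; everything else in the
# path (radio, queuing, processing) only adds to this lower bound.

SPEED_IN_FIBER_KM_S = 200_000

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

print(f"Edge site at 20 km:    {rtt_ms(20):.2f} ms")
print(f"Regional DC at 500 km: {rtt_ms(500):.2f} ms")
print(f"Cloud at 3000 km:      {rtt_ms(3000):.1f} ms")
```

Even before any processing, a distant cloud region consumes tens of milliseconds of the budget, which is why use cases with single-digit-millisecond targets effectively require compute at the edge.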

Current Trends in Edge Computing for Telecoms

The adoption of edge computing in telecoms is rapidly evolving, with several trends driving the industry forward:

  • 5G and private networks integration: The deployment of 5G networks is closely intertwined with edge computing. 5G's high data transfer rates and low latency requirements demand edge infrastructure to deliver on its promises. Cloud RAN and service-based architecture packet core functions drive demand for edge computing through the colocation of UPF and CU/DU functions, particularly for private networks.
  • Network Slicing: Network operators are increasingly using network slicing to create virtualized network segments, allowing them to allocate resources and customize services for different applications and use cases.
  • Ecosystem Partnerships: Telcos are forging partnerships with cloud providers, hardware manufacturers, and application developers to explore retail and wholesale edge compute services.

Future Prospects

The future of edge computing in telecoms offers several exciting possibilities:
  • Edge-AI Synergy: As artificial intelligence becomes more pervasive, edge computing will play a pivotal role in real-time AI processing, enhancing applications such as facial recognition, autonomous drones, and predictive maintenance. Additionally, AI/ML is emerging as a key value proposition in a number of telco CNFs, particularly in the access domain, where RAN intelligence is key to optimize spectrum and energy usage, while tailoring user experience.
  • Industry-Specific Edge Solutions: Different industries will customize edge computing solutions to cater to their unique requirements. This could result in the development of specialized edge solutions for healthcare, manufacturing, transportation, and more.
  • Edge-as-a-Service: Telecom operators are likely to offer edge services as a part of their portfolio, allowing enterprises to deploy and manage edge resources with ease.
  • Regulatory Challenges: As edge computing becomes more integral to telecoms, regulatory challenges may arise, particularly regarding data privacy, security, and jurisdictional concerns.

New revenues streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest-hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited due to poor online experience. The most successful game franchises are Massively Multiplayer Online titles. They pit dozens of players against each other and require tightly controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, if they deploy gaming servers at the edge. Even without a full-blown gaming service, providing game caching at the edge can drastically reduce download times for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today that are proven to provide superior experience compared to the cloud. These services range from high performance cloud storage, to remote desktop, video surveillance and recognition.
  • Beyond operators-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
Edge computing is rapidly maturing in the telecom industry by enabling low-latency, high-performance, and secure services that meet the demands of new use cases. As we move forward, the integration of edge computing with 5G and the continuous development of innovative applications will shape the industry's future. Telecom operators that invest in edge computing infrastructure and capabilities will be well-positioned to capitalize on the opportunities presented by this transformative technology. 


Friday, October 20, 2023

FYUZ 2023 review and opinions on latest Open RAN announcements

 

Last week marked the second edition of FYUZ, the Telecom Infra Project's annual celebration of open and disaggregated networks. TIP's activity, throughout the year, provides a space for innovation and collaboration across the main telecom network domains: access, transport and core. The working groups create deployment blueprints as well as implementation guidelines and documentation. The organization also federates a number of open labs, facilitating interoperability, conformance and performance testing.

I was not there for the show's first edition last year, but found a lot of valuable insight in this year's. I understand from casual discussions with participants that this year was a little smaller than last, probably because the previous edition saw Meta presenting its Metaverse-ready networks strategy, which attracted many people from outside the traditional telco realm. At about 1,200 attendees, the show felt busy without being overwhelming, and the mix of main-stage conference content in the morning and breakout presentations in the afternoon left ample time for sampling the top-notch food and browsing the booths. What I also found very different about this show was how approachable and relaxed attendees were, which allowed for productive yet casual discussions.

Even before FYUZ, the previous incarnation of the show, the TIP forum was a landmark show for vendors and operators announcing their progress on open and disaggregated networks, particularly around open RAN.

The news that came out of the show this year marked an interesting progress in the technology's implementation, and a possible transition from the trough of disillusion to a pragmatic implementation.

The first day saw big announcements from Santiago Tenorio, TIP's chairman and head of Open RAN at Vodafone. The operator announced that Open RAN's evaluation and pilots were progressing well and that its next global RFQ for RAN refresh, covering over 125,000 cell sites, would see Open RAN gain at least 30% of the planned deployment. The RFQ is due to be released this year for selection in early 2024, as contracts with existing vendors are due to expire in April 2025.

That same day, Ericsson's head of networks, Fredrik Jejdling, confirmed the company's support of Open RAN, announced earlier this year. You might have read my perspective on Ericsson's stance on Open RAN; the presentation did not change my opinion, but it is good progress for the industry that the RAN market leader now officially supports the technology, albeit with some caveats.

Nokia, on their side announced a 5G Open RAN pilot with Vodafone in Italy, and another pilot successfully completed in Romania, on a cluster of Open RAN sites shared by Orange and Vodafone (MOCN).

While TIP is a traditional conduit for the big 5 European operators to enact their Open RAN strategy, this year's event was dominated by Vodafone, with a somewhat subdued presence from Deutsche Telekom, Telefonica, Orange and TIM. Rakuten Symphony was notable for its absence, as was Samsung.

The subsequent days saw less prominent announcements, but good representation and panel participation from Open RAN supporters and vendors. In particular, Mavenir and Juniper Networks were fairly vocal about late Open RAN joiners who do not really seem to embrace multi-vendor competition and an open API / interface approach.


I was fortunate to be on a few panels, notably on the main stage to discuss RAN intelligence progress, particularly around the RICs and Apps emergence as orchestration and automation engines for the RAN.

I also presented the findings of my report on the topic (presentation below) and moderated a panel on overcoming automation challenges in telecom networks with CI/CD/CT.


Wednesday, October 18, 2023

Generative AI and Intellectual Property

Since the launch of ChatGPT, Generative Artificial Intelligence and Large Language Models have gained extraordinary popularity and agency in a very short amount of time. As we all play around with the most approachable use cases to generate text, images and videos, governments, global organizations and companies are busy developing the technology and racing to harness the early mover's advantage that this disruption will bring to all areas of our society.

I am not a specialist in the field and my musings might be erroneous here, but it feels that the term Gen AI might be a little misleading, since a lot of the technology relies on vast datasets that are used to assemble composite final products. Essentially, the creation aspect is more assembly than pure creation. One could object that every music sheet is just an assembly of notes and that creation is still there, even as the author is influenced by their taste and exposure to other authors... Fair enough, but in the case of document / text creation, it feels that the use of public information to synthesize a document is not necessarily novel.

In any case, I am an information worker, most times a labourer, sometimes an artisan, but in any case I live from my intellectual property. I chose to make some of that intellectual property available license-free here on this blog, while a larger part is sold in the form of reports, workshops, consulting work, etc. This work might or might not be license-free, but it is always copyrighted, meaning that I hold the rights to the content and allow its distribution under specific covenants.

It strikes me that, as I see crawlers go through my blog and index the content I make publicly available, this serves two purposes at odds with each other. The first allows my content to be discovered and to reach a larger audience, which benefits me in terms of notoriety and increased business. The second, more insidious, not only indexes but mines my content for aggregation into LLMs, so that it can be regurgitated and assembled by an AI. It could be extraordinarily difficult to apportion an AI's rendition of an aggregated document to its sources, but it feels unfair that copyrighted content is not attributed.

I have been playing with the idea of using LLMs to create content. Anyone can do that with prompts and some license-free software, but I am fascinated by the idea of an AI assistant that would be able to write like me, using my semantics and quirks, and that I could train through reinforcement learning from human feedback. Again, this poses some issues. To be effective, this AI would need access to my dataset, the collection of intellectual property I have created over the years. This content is protected and is my livelihood, so I cannot share it with a third party without strict conditions. That rules out free software that can reuse whatever content you give it to ingest.

With licensed software, I am still not sure the right mechanisms are in place for copyright and content protection and control, so that the content I feed to the LLM remains protected and accessible only to me, while the LLM can ingest other, license-free public domain content to enrich the dataset.

Are other information workers worried that LLMs/AI reuse their content without attribution? Is it time to have a conversation about Gen AI, digital rights management and copyright?

***This blog post was created organically without assistance from Gen AI, except for the picture, created with Canva.com.

Tuesday, October 3, 2023

Should regulators forfeit spectrum auctions if they can't resolve Net Neutrality / Fair Share?

I have been writing about Net Neutrality and Fair Share broadband usage for nearly 10 years. Both sides of the argument have merit, and it is difficult to find a balanced view represented in the media these days. Absolutists would have you believe that internet usage should be unregulated, with everyone able to stream, download and post anything anywhere, without respect for intellectual property or fair usage; on the other side of the fence, service provider dogmatists would like to control, apportion, prioritize and charge based on their interests.

Of course, the reality is a little more nuanced. A better understanding of the nature and evolution of traffic, as well as the cost structure of networks help to appreciate the respective parties' stance and offer a better view on what could be done to reduce the chasm.

  1. From a cost structure perspective first, our networks grow and accommodate demand differently depending on whether we are looking at fixed line / cable / fibre broadband or mobile. 
    1. In the first case, capacity growth is a function of technology and civil works. 
      1. On the technology front, the evolution from dial-up / PSTN to copper and fiber dramatically increases the network's capacity and has followed ~20-year cycles. The investments are enormous and require the deployment and management of central offices and their evolution to edge compute data centers. These investments happen in waves within a relatively short time frame (~5 years). Once in operation, the return on investment is a function of the number of users and the utilisation rate of the asset, which in this case means filling the network with traffic.
      2. On the civil works front, throughout the technology evolution, continuous work is ongoing to lay transport fiber along new housing developments, while replacing antiquated and aging copper or cable connectivity. This is a continuous burn and its run rate is a function of the operator's financial capacity.
    2. In mobile networks, you can find similar categories but with a much different balance and impact on ROI.
      1. From a technology standpoint, the evolution from 1G to 5G has taken roughly 10 years per cycle. A large part of the investment for each generation is a spectrum license acquired from the regulator / government. In addition, most network elements, from the access to the core and OSS/BSS, need to be changed. The transport part relies in large part on the fixed network above. Until 5G, most of these elements were constituted of proprietary servers and software, which meant a generational change induced a complete forklift upgrade of the infrastructure. With 5G, the separation of software and hardware, the extensive use of COTS hardware and the implementation of a cloud-based separation of user and control planes should mean that the next generational upgrade will be less expensive, with only software and part of the hardware necessitating a complete refresh.
      2. The civil works for mobile networks are comparable to those for fixed networks for new coverage, but follow the same cycles as the technology timeframe with respect to upgrades and changes necessary to the radio access. Unlike the fixed network, though, there is an obligation of backwards compatibility, with many networks still running 2G, 3G and 4G while deploying 5G. The real estate being essentially antennas and cell sites, this becomes a very competitive environment with limited capacity for growth in space, pushing service providers to share assets (antennas, spectrum, radios...) and to deploy, whenever possible, multi-technology radios.
The conclusion here is that fixed networks have long investment cycles and ROI, low margins, and rely on the number of connections and traffic growth, while mobile networks have shorter investment cycles, with bursty margin growth and reduction at each new generation.

What does this have to do with Net Neutrality / Fair Share? I am coming to it, but first we need to examine the evolution of traffic and prices to understand where the issue resides.

Now, in the past, we had to pay for every single minute, text or kilobyte received or sent. Network operators were making money off traffic growth and were pushing users and content providers to fill their networks. Video somewhat changed that. A user watching a 30-second video doesn't really perceive whether the video is at 720p, 1080p or 4K, 30 or 60 fps; it is essentially the same experience. That same video, though, can vary in size by a factor of ~20 depending on its resolution. To compound the issue, operators foolishly transitioned to all-you-can-eat data plans with 4G to acquire new consumers, a self-inflicted wound that has essentially killed their 5G business case.
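To put rough numbers on that elasticity, here is a back-of-envelope sketch; the bitrates are illustrative assumptions for a typical streaming codec, not measured values:

```python
# Back-of-envelope: size of a 30-second video clip at typical streaming
# bitrates. Bitrate figures (Mbps) are rough, assumed values for illustration.
BITRATES_MBPS = {"720p30": 2.5, "1080p30": 5.0, "1080p60": 7.5, "4K60": 45.0}

def clip_size_mb(bitrate_mbps: float, seconds: int = 30) -> float:
    """Approximate clip size in megabytes at a given bitrate."""
    return bitrate_mbps * seconds / 8  # Mbps * s = megabits; /8 -> megabytes

sizes = {res: clip_size_mb(rate) for res, rate in BITRATES_MBPS.items()}
print(sizes)  # ~9 MB at 720p30 vs ~169 MB at 4K60
print(sizes["4K60"] / sizes["720p30"])  # ~18x spread for the same experience
```

The exact multiple depends on codec and content, but the order of magnitude is the point: the same perceived experience can cost the network nearly twenty times the bytes.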

I have written at length about the erroneous assumptions that are underlying some of the discourses of net neutrality advocates. 

In order to understand net neutrality and traffic management, one has to understand the different perspectives involved.
  • Network operators compete against each other on price, coverage and, more importantly, network quality. In many cases, they have identified that improving or maintaining Quality of Experience is the single most important success factor for acquiring and retaining customers. We have seen it time and again with voice services (call drops, voice quality…), messaging (texting capacity, reliability…) and data services (video start, stalls, page loading time…). These KPIs are at the heart of the operator’s business. As a result, operators tend to improve or control user experience by deploying an array of traffic management functions.
  • Content providers assume that the highest quality of content (8K UHD for video, for instance) equals maximum experience for the subscriber, and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. 
The flaw here is the assumption that the optimum is the product of many maxima self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through these networks.

This behavior leads to a network where resources can be in contention and all end-points vie for priority and maximum resource allocation. From this perspective one can understand that there is no such thing as "net neutrality" at least not in wireless networks. 

When network resources are over-subscribed, decisions are taken as to who gets more capacity, priority, speed... The question becomes who should be in position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed, it is subjected to traffic. When in contention, resources are managing traffic based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps/browsers and content providers if what we want to enable is fair and equal access to resources.

As we contemplate 6G, with hints of metaverse, augmented / mixed reality and hyperconnectivity, the cost structure of network infrastructure hasn't yet been sufficiently decoupled from traffic growth; as we have seen, video is elastic and XR will be a heavy burden on networks. Network operators have so far essentially failed to offer attractive digital services that would monetize their network investments. Video and digital service providers already pay for their on-premise and cloud infrastructure as well as transport; there is little chance they would finance telco operators' capacity growth.

Where does this leave us? It might be time for regulators / governments either to take an active and balanced role in Net Neutrality and Fair Share, to ensure that both sides can find a sustainable business model, or to forfeit spectrum auctions for the next generations.

Monday, October 2, 2023

DOCOMO's 30% TCO Open RAN savings

DOCOMO announced last week, during Mobile World Congress Las Vegas, the availability of its OREX offering for network operators. OREX, which stands for Open RAN Experience, was initially introduced by the Japanese operator in 2021 as OREC (Open RAN Ecosystem).

The benefits claimed by DOCOMO are quite extraordinary, as they expect to "reduce clients’ total cost of ownership by up to 30% when the costs of initial setup and ongoing maintenance are taken into account. It can also reduce the time required for network design by up to 50%. Additionally, OREX reduces power consumption at base stations by up to 50%".

The latest announcement clarifies DOCOMO's market proposition and differentiation. Since the initial OREX communications, DOCOMO had been presenting to the market a showcase of validated Open RAN blueprint deployments that the operator had carried out in its lab. What was unclear was the role DOCOMO wanted to play: was the operator just offering best practice and exemplar implementations, or were they angling for a different play?

On paper, the operator showed an impressive array of vendors, collaborating to provide multi vendor Open RAN deployments, with choices and some possible permutations between each element of the stack. 


At the server layer, OREX provides options from DELL, HP and Fujitsu, all on x86 platforms, with various accelerators (ASICs, FPGAs...) from Intel FlexRAN, Qualcomm, AMD and NVIDIA. While the COTS servers are readily interchangeable, the accelerator layer binds the Open RAN software vendor and is not easily swappable.

At the virtualization O-Cloud layer, DOCOMO has integrated VMware, Red Hat, and Wind River, which represent the current best of breed in that space.

The base station software CU / DU has seen implementations from Mavenir, NTT Data, and Fujitsu. 

What is missing in this picture, and a little misleading, is the Open Radio Unit vendors that have participated in these setups, since this is where network operators need the most permutability. As of today, most Open RAN multi-vendor deployments will see a separate vendor in the O-RU and CU/DU space. This is because no single vendor today can satisfy the variety of O-RUs necessary to meet all the spectrum / form factor combinations a brownfield operator needs. More details about this in my previous state of Open RAN post here.

In this iteration, DOCOMO has clarified the O-RU vendors it has worked with most recently (Dengyo Technology, DKK Co, Fujitsu, HFR, Mavenir, and Solid). As always, the devil is in the details, and unfortunately DOCOMO falls short of providing a more complete view of the types of O-RU (mMIMO or small cell?) and the combinations of O-RU vendor - CU/DU vendor - accelerator vendor - band, which is ultimately the true measure of how open this proposition is.

What DOCOMO clarifies most in this latest iteration is its contribution and the role it expects to play in the market. 

First, DOCOMO introduces their Open RAN compliant Service Management and Orchestration (SMO). This offering is a combination of NTT DOCOMO developments and third party contributions (details can be found in my report and workshop Open RAN RICs and Apps 2023). The SMO is DOCOMO's secret sauce when it comes to the claimed savings, resulting mainly from automation of design, deployment and maintenance of the Open RAN systems, as well as RU energy optimization.


Finally, DOCOMO presents its vast integration experience and is now proposing systems integration, support and maintenance services. The operator seeks the role of specialized SI and prime contractor for these O-RAN projects.

While DOCOMO's experience is impressive and has led many generations of network innovation, this move from leading operator and industry pioneer to O-RAN SI and vendor is reminiscent of other Japanese companies, such as Rakuten with their Symphony offering. Japanese operators and vendors see the contraction of their domestic market as a strategic threat to their core business, and they are trying to replicate their success overseas. While quite successful in greenfield environments, the hypothesis that brownfield operators (particularly tier 1) will buy technology and services from another carrier (even one not geographically competing) still needs to be validated. 

Monday, September 25, 2023

Is Ericsson's Open RAN stance that open?

 

An extract from the Open RAN RIC and Apps report and workshop.

Ericsson is one of the most successful Telecom Equipment Manufacturers of all time, having navigated market concentration phases, the emergence of powerful rivals from China and elsewhere, and the pitfalls of the successive generations and their windows of opportunity for new competitors to emerge.

With a commanding estimated global market share of 26.9% (39% excluding China) in RAN, the company is the uncontested leader in the space. While the geopolitical situation and the ban of Chinese vendors in many western markets has been a boon for the company’s growth, Open RAN has become the largest potential threat to their RAN business.

At first skeptical (if not outright hostile) toward the new architecture, the company has kept an eye on its development and traction over the last few years and has formulated a cautious strategy to participate in and influence its development.

In 2023, Ericsson seems to have accepted that Open RAN is here to stay and represents both a threat and an opportunity for its telecom business. The threat is of course to the RAN infrastructure business: while the company has been moving to Cloud RAN, virtualizing and containerizing its software, it still mostly ships vertical, fully integrated base stations.

When it comes to Open RAN, the company seems to get closer to embracing the concept, with conditions.

Ericsson has been advocating that the current lower-layer split 7.2.x is not suitable for massive MIMO and high-capacity 5G systems and is proposing an alternative fronthaul interface to the O-RAN Alliance. Cynics might say this is a delaying tactic, as other vendors have deployed massive MIMO on 7.2.x in the field, but as market leader, Ericsson has strong datasets to bring to the conversation and contest the suitability of the current implementation. Ericsson is now publicly endorsing the Open RAN architecture and, having virtualized its RAN software, will offer a complete solution, with O-RU, vDU, vCU, SMO and Non-RT RIC. The fronthaul will rely on the recently proposed interface, and the midhaul will remain the 3GPP F1 interface.

On the opportunity front: while most Ericsson systems usually ship with an Element Management System (EMS), which can be integrated into a Management and Orchestration (MANO) or Service Management and Orchestration (SMO) framework, the company has not entirely dominated this market segment, and Open RAN, in the form of SMO and Non-RT RIC, represents an opportunity to grow in the strategic intelligence and orchestration sector.

Ericsson is using the market leader playbook to its advantage: first rejecting Open RAN as immature, underperforming and insecure, then admitting that it can provide some benefits in specific conditions, and now embracing it with very definite caveats.

The fronthaul interface proposal by the company seems self-serving, as no other vendor has really raised the same concerns in terms of performance, and indeed commercial implementations have been observed with performance profiles comparable to traditional vendors'.

The Non-RT RIC and rApp market positioning is astute and allows Ericsson simultaneously to claim support for Open RAN and to attack the SMO market space with a convincing offer. The implementation is solid and reflects Ericsson’s high industrialization and quality practice. It will doubtless offer a mature implementation of SMO / Non-RT RIC and rApps and provide a useful set of capabilities for operators who want to continue using Ericsson RAN with a higher level of instrumentation. The slow progress on third-party integration, both from a RIC and an Apps perspective, is worrisome and could be either the product of the company's quality and administrative processes or a strategy to keep the solution fairly closed and Ericsson-centric, barring a few token third-party integrations.


Thursday, September 14, 2023

O-RAN alliance rApps and xApps typology

 An extract from the Open RAN RIC and Apps report and workshop.

1.    O-RAN defined rApps and xApps

1.1.        Traffic steering rApp and xApp

Traditional RAN provides few mechanisms to load balance and force traffic on specific radio paths. Most deployments see overlaps of coverage between different cells in the same spectrum, as well as other spectrum layered in, allowing performance, coverage, density and latency scenarios to coexist. The methods by which a UE is connected to a specific cell and a specific channel are mostly static, based on the location of the UE, signal strength, service profile, and the parameters to hand over a connection from one cell to another, within the same cell from one bearer to another, or from one sector to another. The implementation is vendor-specific and statically configured.



Figure 9: Overlapping cells and traffic steering

Non-RT RIC and rApps offer the possibility to change these handover and assignments programmatically and dynamically, taking advantage of policies that can be varied (power optimization, quality optimization, performance or coverage optimization…) and that can change over time. Additionally, the use of AI/ML technology can provide predictive input capability for the selection or creation of policies allowing a preferable outcome.

The traffic steering rApp is a means to design and select traffic profile policies and to let the operator instantiate these policies dynamically, per cell, per sector, per bearer, or even per UE or per type of service. The SMO or the Non-RT RIC collects RAN data on traffic, bearer, cell, load, etc. from the E2 nodes and instructs the Near-RT RIC to enforce a set of policies through the established parameters.
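The policy mechanism described above can be sketched in a few lines; this is an illustration of the idea, not the O-RAN interfaces, and the cell metrics, policy names and RSRP floor are all hypothetical:

```python
# Illustrative sketch: a traffic steering decision picks a target cell for a
# UE according to the active optimization policy. All values are assumptions.
CELLS = [
    {"id": "cell-A", "rsrp_dbm": -85,  "load_pct": 80, "power_w": 900},
    {"id": "cell-B", "rsrp_dbm": -95,  "load_pct": 30, "power_w": 400},
    {"id": "cell-C", "rsrp_dbm": -100, "load_pct": 10, "power_w": 250},
]

# Each policy ranks candidate cells differently.
POLICIES = {
    "quality": lambda c: c["rsrp_dbm"],    # strongest signal wins
    "balance": lambda c: -c["load_pct"],   # least-loaded cell wins
    "energy":  lambda c: -c["power_w"],    # lowest-power cell wins
}

def steer(policy: str, min_rsrp_dbm: float = -105) -> str:
    """Pick the best cell under the given policy among usable candidates."""
    usable = [c for c in CELLS if c["rsrp_dbm"] >= min_rsrp_dbm]
    return max(usable, key=POLICIES[policy])["id"]

print(steer("quality"))  # cell-A
print(steer("balance"))  # cell-C
print(steer("energy"))   # cell-C
```

The point of the rApp is precisely that the active policy (and its parameters) can be swapped programmatically over time, instead of being frozen in vendor configuration.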

1.2.       QoE rApp and xApp

This rApp assumes that specific services such as AR/VR will require different QoE parameters that need to be adapted in a semi-dynamic fashion. It proposes the use of AI/ML to predict traffic load and QoE conditions in order to optimize traffic profiles.

UE and network performance data transit from the RAN to the SMO layer over the O1 interface; QoE AI/ML models are trained, process the data, infer the current state and predict its evolution over time; the rApp then transmits QoE policy directives to the Near-RT RIC via the Non-RT RIC.

1.3.       QoS based resource optimization rApp and xApp

QoS based resource optimization rApp is an implementation of network slicing optimization for the RAN. Specifically, it enables the Non-RT RIC to guide the Near-RT RIC in the allocation of Physical Resource Blocks to a specific slice or sub slice, should the Slice Level Specification not be satisfied by the static slice provisioning.
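The control idea can be illustrated with a minimal sketch; this is not the O-RAN A1/E2 API, and the step size, headroom band and figures are assumptions:

```python
# Minimal sketch: if a slice's measured throughput falls below its Slice
# Level Specification target, grant it more Physical Resource Blocks,
# capped by what is available; give capacity back when comfortably over
# target. Numbers are illustrative.

def adjust_prbs(allocated: int, measured_mbps: float, target_mbps: float,
                free_prbs: int, step: int = 5) -> int:
    """Return a new PRB allocation nudging the slice toward its SLS target."""
    if measured_mbps < target_mbps:        # SLS breached: add capacity
        return allocated + min(step, free_prbs)
    if measured_mbps > 1.2 * target_mbps:  # comfortable margin: give back
        return max(allocated - step, step)
    return allocated                       # within band: leave as-is

print(adjust_prbs(allocated=40, measured_mbps=80, target_mbps=100, free_prbs=20))   # 45
print(adjust_prbs(allocated=40, measured_mbps=130, target_mbps=100, free_prbs=20))  # 35
```

In the actual architecture, the Non-RT RIC would express this as guidance policy and the Near-RT RIC would apply it on the relevant E2 nodes.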

1.4.       Context-based dynamic handover management for V2X rApp and xApp

Since mobile networks have been designed for mobile but relatively low-velocity users, the provision of high-speed, reliable mobile service along highways requires specific designs and configurations. As vehicles become increasingly connected to the mobile network and might rely on network infrastructure for a variety of uses, vehicle-to-everything (V2X) use cases are starting to appear, primarily as research and science projects. In this case, the App uses AI/ML models to predict whether a UE belongs to a V2X category, and its trajectory, in order to facilitate cell handover along its path.

1.5.       RAN Slice Assurance rApp and xApp

3GPP has defined the concept of creating a connectivity product with specific attributes (throughput, reliability, latency, energy consumption) applicable to specific devices, geographies, enterprises… as slices. In an O-RAN context, the Non-RT RIC and Near-RT RIC can provide optimization strategies for network slicing. In both cases, the elements can monitor the performance of the slice and perform large- or small-interval adjustments to stay close to the slice’s Service Level Agreement (SLA) targets.

Generally speaking, these apps facilitate the allocation of resource according to slice requirements and their dynamic optimization over time.

1.6.       Network Slice Instance Resource Optimization rApp

The NSSI rApp aims to use AI/ML to model a cell's traffic patterns through historical data analysis. The model is then used to predict network load and conditions for specific slices and to adjust resource allocation per slice dynamically and proactively.

1.7.       Massive MIMO Optimization rApps and xApps

Massive MIMO (mMIMO) is a key technology to increase performance in 5G. It uses complex algorithms to create signal beams which minimize signal interference and provide narrow transmission channels. This technology, called beamforming, can be configured to provide variations in the vertical and horizontal axes (azimuth and elevation), resulting in beams of different shapes and performance profiles. Beamforming and massive MIMO are a characteristic of the Radio Unit, while the DU provides the necessary data for the configuration and direction of the beams.

In many cases, when separate cells overlap a given geography for coverage or density, with either multiple macro cells or a mix of macro and small cells, the mMIMO beams are usually configured statically and manually, based on the cell's situation. As traffic patterns, the urban environment and interference / reflections change, it is not rare for the configured beams to lose efficiency over time.

In this instance, the rApp collects statistical and measurement data from the RAN to inform a predictive model of traffic patterns. This model, in turn, informs a grid of beams that can be applied to a given situation. The grid of beams is transmitted to the DU through the Near-RT RIC and a corresponding xApp, responsible for assigning the specific PRB and beam parameters to the RU. A variant of this implementation requires no grid of beams or AI/ML, but a list of statically configured beams that can be selected based on specific thresholds or RAN measurements.
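The simpler, non-AI/ML variant can be sketched as a threshold lookup; beam set names, thresholds and the load metric are hypothetical placeholders:

```python
# Sketch of the static variant: pick a pre-configured beam set based on a
# measured cell load threshold. All names and thresholds are assumptions.
BEAM_SETS = [
    # (max_load_pct, beam set tuned for that regime)
    (30,  "wide-coverage-8-beams"),
    (70,  "mixed-16-beams"),
    (100, "narrow-capacity-32-beams"),
]

def select_beam_set(cell_load_pct: float) -> str:
    """Return the first beam set whose load threshold covers the measurement."""
    for threshold, beams in BEAM_SETS:
        if cell_load_pct <= threshold:
            return beams
    return BEAM_SETS[-1][1]  # saturated: fall back to the capacity set

print(select_beam_set(20))  # wide-coverage-8-beams
print(select_beam_set(85))  # narrow-capacity-32-beams
```

The AI/ML version replaces the static threshold table with a model that predicts which grid of beams will perform best for the forecast traffic pattern.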

Additional apps leveraging unique MIMO features such as downlink transmit power, Multi-User MIMO and Single-User MIMO allow, by reading UE performance, adjustment of the transmit power or the beam parameters to improve the user experience or the overall spectral efficiency.

1.8.       Network energy saving rApps and xApps

These apps are a collection of methods to optimize power consumption in the open RAN domain.

    Carrier and cell switch off/on rApp:

A simple mechanism to identify the capacity needed within a cell and whether it is possible to reduce power consumption by switching off frequency layers (carriers) or the entire cell, should sufficient coverage / capacity exist on other adjoining, overlapping cells. An AI/ML model on the Non-RT RIC might assist in the selection and decision, as well as provide a predictive model. Prediction is key here, as one cannot simply switch off a carrier or a cell without first gracefully handing over its traffic to an adjoining carrier or cell, to avoid a negative impact on quality of experience.
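The gating condition described above boils down to a headroom check before any switch-off; this sketch uses invented field names, capacities and an assumed 80% headroom cap:

```python
# Illustrative decision sketch: a carrier is only a switch-off candidate if
# the remaining carriers can absorb its predicted traffic with headroom.
# Field names and the 0.8 headroom factor are assumptions.

def can_switch_off(carrier: dict, others: list[dict],
                   headroom: float = 0.8) -> bool:
    """True if the carrier's predicted load fits on the remaining carriers."""
    spare = sum(headroom * c["capacity_mbps"] - c["predicted_load_mbps"]
                for c in others)
    return spare >= carrier["predicted_load_mbps"]

c_small = {"id": "n78-c1", "capacity_mbps": 200, "predicted_load_mbps": 150}
c_big   = {"id": "n78-c2", "capacity_mbps": 500, "predicted_load_mbps": 100}
print(can_switch_off(c_small, [c_big]))  # True:  0.8*500 - 100 = 300 >= 150
print(can_switch_off(c_big, [c_small]))  # False: 0.8*200 - 150 = 10 < 100
```

The `predicted_load_mbps` input is where the AI/ML model earns its keep: switching off on current load alone risks a scramble to power the carrier back up minutes later.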

    RF Channel reconfiguration rApp:

mMIMO is achieved by combining radiating elements to form the beams. A mMIMO antenna array with 64 transmitters and receivers (64T64R) can be reconfigured to 32, 16 or 8 T/R for instance, resulting in a roughly linear power reduction. An AI/ML model can be used to determine the optimal antenna configuration based on immediate and predicted traffic patterns.
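The linear scaling is easy to see with rough arithmetic; the 900 W baseline for a 64T64R unit is an assumed, illustrative figure, not a vendor datasheet value:

```python
# Rough arithmetic for the linear power scaling described above: halving the
# active transceiver chains roughly halves the RF power draw.
# The 900 W full-array baseline is an assumption for illustration.

def rf_power_w(active_chains: int, full_chains: int = 64,
               full_power_w: float = 900.0) -> float:
    """Linear model: power scales with the fraction of active T/R chains."""
    return full_power_w * active_chains / full_chains

for chains in (64, 32, 16, 8):
    print(chains, rf_power_w(chains))  # 900.0, 450.0, 225.0, 112.5
```

Real radios are not perfectly linear (baseband and cooling overheads remain), which is one more reason the reconfiguration decision benefits from measured data rather than a naive model.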

Monday, September 11, 2023

Why was virtualized RAN started?

 


Traditional RAN equipment vendors have developed and deployed RAN solutions in every band, in every generation, for any network configuration. This doesn’t happen without an extremely well industrialized process, with rigid interfaces and change management. This cumulative intellectual property, together with the capacity to deploy in a few months a new generation of network is what operators have been valuing until now.

The creation of a new radio platform is a large investment, in the range of tens of millions of dollars, with a development timeframe extending from 18 to 30 months. Because it is a complex solution, underpinned by large hardware dependencies, it requires very good planning and development management, only available to highly industrialized companies. The development of subsequent radios on the same platform might take less time and cost, but essentially the economics remain the same: you need at least 10,000 units of firm orders for a radio to be economically viable.

It is expensive because it works. As long as you don’t mind being essentially dependent on your vendor for all professional services associated with their product, they can guarantee it will work. This part is key, because taking sole responsibility for the deployment, operation and maintenance of a radio system is a huge undertaking. Essentially, the traditional vendors sell, together with equipment and services, an insurance policy in the form of onerous Service Level Agreements (SLAs), willing to undertake penalties and damages in case of failure.

Unfortunately, most network operators find themselves in a situation where, with the reduction of their Average Revenue Per User (ARPU) combined with perpetual traffic growth and the appetite for video streaming, they see their costs steadily increase and their margins compressed. Connectivity increasingly looks like a commodity from a customer standpoint, with easy availability and low friction to change providers, whereas it comes at an increasing cost for its operators.

Changing the cost structure of buying capacity is a must for all network operators to survive, and it touches all aspects of their networks.

Fortunately, there are a few markets that have seen similar needs in the past and solutions have emerged. Particularly, the internet giants, video streaming services and social networks, have had to face explosive growth of traffic, with essentially flat pricing or advertising-based revenue models which forced them to reimagine how to scale their network capacity.

From there have emerged technologies such as network virtualization, Software Defined Networking (SDN) and their higher levels of abstraction leading to the cloud computing market as we know it.

Applying these methods and technologies to the RAN market seemed like a sensible and effective way to change its cost structure.

Thursday, August 10, 2023

What RICs and Apps developers need to succeed

 

We spoke a bit about my perspective on the Non- and Near-Real Time RICs' likely trajectories and the value rApps and xApps have for operators and the industry. As I conclude the production of my report and workshop on Open RAN RICs and Apps, after many discussions with the leaders in the field, I have come away with a few conclusions.

There are many parameters for a company to be successful in telecoms, and in the RIC and Apps area there are at least three key skill sets necessary to make it.

Artificial Intelligence is a popular term many in the industry use as shorthand for their Excel-macro linear projections and forecasting mastery. Data literacy is crucial here, as big data / machine learning / deep learning / artificial intelligence terms are bandied around for marketing purposes. I am not an expert in the matter, but I have a strong feeling that the use cases for algorithmic intelligence fall into a few categories. I will try to expose them in my own terms; apologies in advance to the specialists, as the explanation will be basic and profane.

  • Anomaly / pattern detection provides a useful alarming system if the system's behavior has a sufficiently long time series and the variance is somewhat reduced or predictable. This does not require more than data knowledge; it is a math problem.
  • Optimization / correction should allow, provided the anomaly / pattern detection is accurate, pinpointing specific actions that would produce a specific outcome. This is where RAN knowledge is necessary. It is crucial to be able to identify from the inputs whether the output is accurate and to which element it corresponds. Again, a long time series of corrections / optimizations and their impact / deviation is necessary for the model to be efficient.
  • Prediction / automation is the trickiest part. Ideally, given enough knowledge of the system's patterns, variances and deviations, one can predict its behavior over time with some accuracy, both in steady state and when anomalies occur, and take preemptive / corrective action. Drawn to its logical conclusion, full automation and autonomy would be possible. This is where most companies overpromise, in my mind. The system here is a network. Not only is it vast and composed of millions of elements (after all, that is just a computing issue), it is also always changing. This means that there is no steady state and that the time series is a collection of dynamically changing patterns. Achieving full automation under these conditions seems impossible. Therefore, it is necessary to reframe expectations, especially in a multi-vendor environment, and to settle for pockets of AI/ML-augmented, limited automation.
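To make the first category concrete — detection as "a math problem" given a long, reasonably stable time series — here is a minimal, hypothetical sketch of rolling z-score anomaly detection on a RAN KPI such as per-cell throughput. The function name, window size and threshold are illustrative assumptions, not taken from any vendor's RIC implementation:

```python
# Hypothetical sketch: rolling z-score anomaly detection on a KPI
# time series (e.g., cell throughput samples). Flags any sample that
# deviates too far from the trailing window's mean.
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Return indices whose value deviates more than `threshold`
    standard deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Skip flat windows (sigma == 0) to avoid division by zero.
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady KPI oscillating around 100, with one sudden dip at index 30.
kpi = [100.0 + (i % 5) * 0.5 for i in range(40)]
kpi[30] = 40.0
print(detect_anomalies(kpi))  # → [30]
```

This is exactly the "long time series, reduced variance" caveat in action: the detector works here because the baseline is predictable, and it would flood with false positives on a KPI whose pattern changes dynamically, which is the crux of the automation problem described above.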

Platform and developer ecosystem management is also extremely important in the RIC and Apps segment if one wants to deploy multi-vendor solutions. The dream of instantiating Apps from different vendors and orchestrating them harmoniously is impossible without a rich platform offering many platform services (lifecycle management, APIs, SDKs, a data / messaging bus, orchestration...). This does not necessarily require much RAN knowledge, and this is why we are seeing many new entrants in this field.

The last, but foremost in my mind, is RAN knowledge. The companies developing RAN Intelligent Controllers and Apps need a deep understanding of the RAN, its workings and its evolution. Deep knowledge is probably not necessary for the most pedestrian use cases around observability and representation of the health and performance of the system or the network, but any App that expects a feedback loop and sends instructions to the lower elements of the architecture needs an understanding not only of the interfaces, protocols and elements but also of their function, interworking and capabilities. If the concept of RICs and Apps is to be successful, several Apps will need to run simultaneously, ideally from different vendors. Understanding the real-life consequences of an energy efficiency App and its impact on quality of service, quality of experience and signaling is key in absolute terms. It becomes even more crucial to understand how Apps can coexist and, simultaneously or by priority, implement power efficiency, spectrum optimization and handover optimization, for instance. The intricacies of beamforming, beam weights and beam steering in mMIMO systems, together with carrier aggregation and dynamic spectrum sharing, mandate a near-real-time / real-time control capability. The balance is delicate, and it is unlikely that scheduler priorities could conceivably be affected by an rApp that has little understanding of these problematics. You don't drive a Formula One car while messing about with the gear settings.

If you want to know how I rank the market leaders in each of these categories, including Accelleran, Aira Technologies, AirHop, Airspan, Capgemini, Cohere Technologies, Ericsson, IS-Wireless, Fujitsu, Juniper, Mavenir, Nokia, Northeastern University, NTT DOCOMO, Parallel Wireless, Radisys, Rakuten, Rimedo Labs, Samsung, VIAVI, VMware and others, you'll have to read my report or register for my workshop.