Wednesday, January 31, 2024

The AI-native telco network

AI, and more particularly generative AI, has been a big buzzword since the public launch of ChatGPT. The promise of AI to automate and operate complex tasks and systems is pervading every industry, and telecom is not impervious to it.

Most telecom equipment vendors have started incorporating AI or have brushed up their big data / analytics skills, at least in their marketing positioning.
We have even seen market acquisitions where AI / automation has been an important part of the investment narrative / thesis (HPE / Juniper Networks).
Concurrently, many startups are being founded or are pivoting towards AI/ML to take advantage of this investment cycle.

In telecoms, big data, machine learning, deep learning and similar methods have been in use for a long time. I was leading such a project at Telefonica in 2016, using advanced prediction algorithms to detect alarming patterns, infer root causes and suggest automated resolutions.

While generative AI is somewhat new, the use of data to analyze, represent and predict network conditions is well established.

AI in telecoms is starting to show some promise, particularly when it comes to network planning, operation, spectrum optimization, traffic prediction and power efficiency. But it comes with a lot of preconditions that are often glossed over by vendors and operators alike.

Like all data dependent technologies, one first has to be able to collect, normalize, sanitize and clean data before storing it for useful analysis. In an environment as idiosyncratic as a telecoms network, this is not an easy task. Not only are networks composed of a mix of appliances, virtual machines and cloud native functions, they have had successive technological generations deployed alongside each other, with different data schemas, protocols, interfaces and repositories, which makes extraction arduous. After that step, normalization is necessary to ensure that the data is represented the same way, with the same attributes, headers, … so that it can be exploited. Most vendors have proprietary data schemas or “augment” standards with “enhanced” headers and metadata. In many cases the data needs to be translated into a format that can be normalized for ingestion. Cleaning and sanitizing are then necessary to ensure that redundant or outlying data points do not skew the data set. As always, “garbage in / garbage out” is an important concept to keep in mind.
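
As an illustration, a minimal normalization step might look like the sketch below. The vendor field names and the common schema are hypothetical, invented for this example; a real pipeline would map many more attributes, formats and transports, but the shape of the problem is the same.

```python
# Minimal sketch of normalizing per-vendor telemetry into a common
# schema before storage. The vendor field names and the target schema
# are hypothetical; real networks juggle far more formats.
from datetime import datetime, timezone

# Each vendor names the same metric and timestamp differently,
# sometimes with "enhanced" proprietary fields on top of standards.
VENDOR_MAPPINGS = {
    "vendor_a": {"ts": "timestamp", "cell": "cellId", "load": "prbUtil"},
    "vendor_b": {"ts": "eventTime", "cell": "nci", "load": "dl_load_pct"},
}

def normalize(vendor: str, record: dict):
    """Map a raw vendor record onto the common schema; drop records
    that fail basic sanity checks (garbage in / garbage out)."""
    m = VENDOR_MAPPINGS[vendor]
    try:
        load = float(record[m["load"]])
        ts = datetime.fromisoformat(record[m["ts"]])
        cell = str(record[m["cell"]])
    except (KeyError, ValueError):
        return None                      # unparseable record: discard
    if not 0.0 <= load <= 100.0:
        return None                      # outlying value: discard
    return {
        "timestamp": ts.astimezone(timezone.utc).isoformat(),
        "cell_id": cell,
        "load_pct": load,
        "source": vendor,
    }

print(normalize("vendor_a", {"timestamp": "2024-01-31T12:00:00+01:00",
                             "cellId": 17, "prbUtil": "63.2"}))
```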

These difficult steps are unfortunately not the only prerequisites for an AI native network. The part that is often overlooked is that the network has to be somewhat cloud native to take full advantage of AI. Automation in telecoms networks requires interfaces and APIs that are defined, open and available at every layer, from access to transport to core, from the physical to the virtual and cloud native infrastructure. NFV, SDN, network disaggregation, open optical, Open RAN, service based architecture, … are some of the components that can enable a network to take full advantage of AI.
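
By way of illustration only, the sketch below shows the kind of closed loop that such open interfaces make possible. The endpoints are simulated stand-ins, not any standard's actual API: the point is that the loop can only exist if every layer exposes a metrics interface and a configuration interface.

```python
# Hypothetical closed-loop automation sketch. get_kpi and push_config
# are simulated stand-ins for open, standardized interfaces; without
# them, each vendor and layer would need its own bespoke integration.
import random
import time

def get_kpi(layer: str, metric: str) -> float:
    """Stand-in for a standardized metrics query over an open API."""
    return random.uniform(50.0, 100.0)   # simulated KPI reading

def push_config(layer: str, change: dict) -> None:
    """Stand-in for a standardized configuration push."""
    print(f"[{layer}] applying {change}")

def control_loop(iterations: int = 3) -> None:
    """Poll a KPI and remediate when it crosses an assumed threshold."""
    for _ in range(iterations):
        load = get_kpi("ran", "prb_utilization_pct")
        if load > 85.0:                  # congestion threshold (assumed)
            push_config("ran", {"action": "add_carrier"})
        time.sleep(1)

control_loop()
```
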
Cloud networks and data centers seem to be the first to adopt AI, both for hosting the voracious GPUs necessary to train Large Language Models and for the resale / enablement of AI oriented companies.

For that reason, greenfield networks recently deployed with state of the art cloud native technologies should be the prime candidates for AI / ML based network planning, deployment and optimization. The amount of work necessary to integrate and deploy AI native functions there is objectively much lower than in incumbent networks.
Yet we haven't really seen sufficient evidence that this level of cloud "nativeness" enables mass optimization and automation with AI/ML, resulting in massive cost savings (at least in OPEX) and creating an unfair competitive advantage against incumbents.

As the industry approaches Mobile World Congress 2024, with companies poised to showcase their AI capabilities, it is crucial to remain cognizant of the prerequisites for these technologies to deliver tangible benefits. Understanding the time and effort required for networks to truly benefit from AI is essential to assessing the realistic impact of these advancements in the telecom sector.

Friday, January 26, 2024

Product Marketing as a Strategic Tool for Telco Vendors

Those who have known me for a long time know that I am a Product Manager by trade. This is how I started my career and, little by little, from products to product lines to solutions, I have come to manage and direct business lines worth several hundred million dollars. Along this path, I have also become a manager and team lead, then moved on to roles with increasing strategic content, from reselling and OEM deals to buy and sell side acquisitions and integrations.

Throughout this time, I have noticed the increasing significance of Product Marketing in the telecoms vendor environment. In a market that has seen (and is still seeing) much concentration, with long sales cycles and risk-averse customers, being able to state a product's differentiating factors intelligently and simply becomes paramount.

Too often, large companies rely on brand equity and marketing communication to support sales. In a noisy market, large companies have many priorities, which end up diluting the brand promise and producing vague and disconnected messages across somewhat misaligned products and services.

By contrast, start-ups and small companies often have a much smaller range of products and services, but, having less budget, they focus in many cases on technology and technical illustrations rather than extolling the benefits and value of their offering.

My experience has underscored the pivotal role of product marketing in shaping a company's valuation, whether for fundraising or acquisition purposes. Yet, despite its proven impact, many still regard it as a peripheral activity. The challenge lies in crafting a narrative that resonates—a narrative that not only embodies the company's strategic vision but also encapsulates market trends, technological evolutions, and competitive dynamics. It's about striking a delicate balance, weaving together product capabilities, customer pain points, and the distinct value proposition in a narrative that is both compelling and credible.

Many companies have marketing communication departments working on product marketing, which often results either in vague and bland positioning or in disconnects between the claims and the true capabilities of the products. This can be very damaging to a company's image when its market claims do not accurately reflect the capabilities of the product or the evolution of the technology.

Other companies make product marketing part of the product management function, where the messaging and positioning might be technically accurate but lack the competitive and market awareness to resonate and to find a differentiating position that will maximize the value of the offering.

As the telecoms vendors' sector braces for heightened competition and market contraction, with established players fiercely guarding their market share against aggressive newcomers, the role of product marketing becomes increasingly critical. It's an art form that merits recognition, demanding heightened attention and strategic investment. For those poised to navigate this complex terrain, embracing product marketing is not just an option; it's an imperative for sustained relevance and success in challenging market conditions.


Monday, January 15, 2024

Gen AI and LLM: Edging the Latency Bet

The growth of generative AI and Large Language Models has revived a fundamental question about the value of a millisecond of latency. When I was at Telefonica, and later consulting at Bell Canada, one of the projects I looked after was the development, business case, deployment, use cases and operation of Edge Computing infrastructure in a telecom network.

Having developed and deployed Edge Computing platforms since 2016, I have had a head start in figuring out the fundamental questions surrounding the technology, the business case and the commercial strategy.

Where is the edge?


The first question one has to tackle is: where is the edge? It is an interesting question because the answer depends on your perspective. The edge is a different location if you are a hyperscaler, a telco network operator or a developer. It can also vary over time and geography. In any case, the edge is a place where one can position compute closer to the user than the current public or private cloud infrastructure in order to derive additional benefits. It can vary from a regional, to a metro, to a mini data center, all the way to on-premise or on-device cloud compute capability. Each has its distinct costs, limitations and benefits.

What are the benefits of Edge Computing?


The second question, or maybe the first one from a pragmatic and commercial standpoint, is: why do we need edge computing? What are the benefits?

While these will vary depending on the consumer of the compute capability and on where the compute function is located, we can derive general benefits that will be indexed to the location. Among these, we can list data sovereignty, increased privacy and security, reduced latency, the enablement of cheaper (dumber) devices, and the creation of new media types, models and services.

What are the use cases of Edge Computing?

I have deployed and researched over 50 use cases of edge computing, from the banal (storage, caching and streaming at the edge) to the sophisticated (TV production) to the specialized (Open RAN, the telco User Plane Function, and machine vision for industrial and agricultural applications).

What is the value of 1ms?

Sooner or later, after testing various use cases, locations and architectures, the fundamental question emerges: what is the value of 1 ms? It is a question heavy with assumptions and correlations. In absolute terms, we would all like connectivity that is faster, more resilient, more power efficient, more economical and lower latency. The factors that condition latency are the number of hops or devices the connection has to traverse between the device and the origin point where the content or code is stored, transformed or computed, and the distance between the device and that compute point. To radically reduce latency, you have to reduce the number of hops or reduce the distance; Edge Computing achieves both.

But obviously, there is a cost. Latency will be proportional to distance, so the fundamental question becomes: what is the optimal placement of a compute resource, and for which use case? Computing is a continuum: some applications and workloads are not latency, privacy or sovereignty sensitive and can run on an indiscriminate public cloud, while others necessitate the compute to be in the same country, region or city, and others still require even closer proximity. The difference in investment between a handful of centralized data centers and several hundreds or thousands of micro data centers is staggering.
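
To make the trade-off concrete, here is a back-of-envelope sketch. All figures are illustrative assumptions rather than measurements: light propagates in fiber at roughly 5 µs per km, and each hop adds some forwarding and queuing delay.

```python
# Back-of-envelope round-trip latency for different compute placements.
# All figures are illustrative assumptions, not measurements.

FIBER_US_PER_KM = 5.0    # ~speed of light in glass, microseconds per km
HOP_DELAY_US = 100.0     # assumed forwarding/queuing delay per hop

def round_trip_ms(distance_km: float, hops: int) -> float:
    """Round trip: propagation both ways plus per-hop delays."""
    one_way_us = distance_km * FIBER_US_PER_KM + hops * HOP_DELAY_US
    return 2 * one_way_us / 1000.0

placements = [
    ("centralized cloud (1,500 km, 12 hops)", 1500, 12),
    ("regional DC (300 km, 7 hops)", 300, 7),
    ("metro edge (50 km, 4 hops)", 50, 4),
    ("on-premise (1 km, 1 hop)", 1, 1),
]

for name, km, hops in placements:
    print(f"{name}: ~{round_trip_ms(km, hops):.1f} ms RTT")
```

Under these assumptions, moving from a centralized cloud to a metro edge removes an order of magnitude of round-trip latency (roughly 17 ms down to just over 1 ms); the last factor of ten only comes from on-premise placement, which is also where the cost per site is highest.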

What about AI and LLM?

Until now, these questions were somewhat theoretical and were answered organically by hyperscalers and operators based on their respective views of the market's evolution. Generative AI and its extraordinary appetite for compute is rapidly changing this market space. Not only does Gen AI account for a sizable and growing portion of all cloud compute capacity, but the question of latency is now coming to the fore. Gen AI relies on Large Language Models that require large amounts of storage and compute to be trained to recognize patterns. The larger the LLM, the more compute capacity it needs, and the better the pattern recognition. Pattern recognition leads to the generation of similar results from incomplete prompts / questions / data sets; that is Gen AI. Where does latency come in? Part of the compute that generates a response to a question lies in inference. While the trained model resides in a large, centralized cloud data center, inference sits closer to the user, at the edge, where it parses the request and feeds the trained model with unlabeled input to receive a predicted answer. The faster the inference, the more responses the model can provide, which means that low latency is a competitive advantage for a Gen AI service.
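
A crude way to see the effect: each interactive exchange with a Gen AI service pays the network round trip on top of the inference compute time, and those round trips accumulate over a session. The figures in this sketch are hypothetical, chosen only to illustrate the mechanism.

```python
# Why placement matters for interactive Gen AI: every conversational
# turn pays the network round trip on top of inference compute.
# All figures below are hypothetical assumptions.

def session_wait_s(turns: int, rtt_ms: float, inference_ms: float) -> float:
    """Total user wait (seconds) across a multi-turn session."""
    return turns * (rtt_ms + inference_ms) / 1000.0

TURNS = 20            # assumed conversational turns per session
INFERENCE_MS = 120.0  # assumed model compute time per response

for placement, rtt_ms in [("centralized cloud", 40.0), ("metro edge", 2.0)]:
    total = session_wait_s(TURNS, rtt_ms, INFERENCE_MS)
    print(f"{placement}: ~{total:.1f} s total wait")
```

In this toy model the edge placement saves well under a second per session; whether such gains justify the investment is precisely the next question.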

As we have seen, there is a relatively small number of options to reduce latency, and they all involve large investments. The question then becomes: what is the value of a millisecond? Is 100 ms or 10 ms sufficient? When it comes to high frequency trading, 1 ms is extremely valuable (billions of dollars). When it comes to online gaming, low latency is not as valuable as controlled and uniform latency across the players. When it comes to video streaming, latency is generally not an issue; but when it comes to machine vision for sorting fruit on a mechanical conveyor belt running at 10 km/h, it is very important.

I have researched and deployed many edge computing use cases and have developed a fairly comprehensive workshop on the technological, commercial and strategic aspects of Edge Computing and data center investment strategies.

If you would like to know more, please get in touch.

Tuesday, January 9, 2024

HPE acquires Juniper Networks


On January 8, the first rumors started to emerge that HPE was entering final discussions to acquire Juniper Networks for $13B. By January 9, HPE announced that it had entered into a definitive agreement for the acquisition.

Juniper Networks, known for its high-performance networking equipment, has been a significant player in the networking and telecommunications sector. It specializes in routers, switches, network management software, network security products, and software-defined networking technology. HPE, on the other hand, has a broad portfolio that includes servers, storage, networking, consulting, and support services.

 The acquisition of Juniper Networks by HPE could be a strategic move to strengthen HPE’s position in the networking and telecommunications sector, diversify its product offerings, and enhance its ability to compete with other major players in the market such as Cisco.

Most analyses I have read so far have pointed to AIOps and Mist AI as the core thesis for the acquisition, enabling HPE to bridge the gap between equipment vendor and solution vendor, particularly in the telco space.

While this is certainly an aspect of the value that Juniper Networks would provide to HPE, I believe that Juniper Networks' latest progress in telco beyond transport, particularly as an early leader in the emerging field of RAN Intelligence and SMO (Service Management and Orchestration), was a key catalyst for HPE's interest.

After all, Juniper Networks has long been a networking specialist and leader, from SDN, SD-WAN and optical to data center, wired and wireless networks. While the company has been making great progress there, gradually virtualizing and cloudifying its routers, firewalls and gateway functions, no revolutionary technology had emerged until the application of Machine Learning and predictive algorithms to the planning, configuration, deployment and management of transport networks.

What is new as well is Juniper Networks' effort to penetrate the telco functions domain beyond transport. The key area ripe for disruption has been the Radio Access Network (RAN), specifically as Open RAN becomes an increasingly relevant industry trend pervading network architectures, culminating in AT&T's selection of Ericsson last month to deploy Open RAN for $14B.

Open RAN offers disaggregation of the RAN, with potential multivendor implementations benefitting from open, standard interfaces. Juniper Networks, not a traditional RAN vendor, has been quick to capitalize on its AIOps expertise by jumping into the RAN Intelligence market space, creating one of the most advanced RAN Intelligent Controllers (RIC) on the market and aggressively integrating with as many reputable RAN vendors as possible. This endeavor, opening up the multi-billion-dollar RAN and SMO markets, is pure software and heavily biased towards AI/ML for automation and prediction.

HPE has been investing heavily in the telco space of late, becoming a preferred supplier of physical infrastructure for telco CaaS and Cloud Native Functions (CNF). What HPE has not been able to do is create software or become a credible solutions provider / integrator. The acquisition of Juniper Networks could help solve this. Just like Broadcom's acquisition of VMware (another early RAN Intelligence leader) or Cisco's acquisition of Accedian, hardware vendors yearn to move up the value chain by acquiring software and automation vendors, giving them the capacity to provide integrated end-to-end solutions and to achieve synergies and economies of scale through vertical integration.

The playbook is not new, but this potential acquisition could signal a consolidation trend in the telecommunications and networking industry, suggesting a more competitive landscape with fewer but larger players. This could have far-reaching implications for customers, suppliers, and competitors alike.