
Wednesday, April 16, 2025

Is AI-RAN the future of telco?

AI-RAN has recently emerged as an interesting evolution of telecoms networks. The Radio Access Network (RAN) has been undergoing a transformation over the last 10 years, from a vertical, proprietary, highly concentrated market segment to a disaggregated, virtualized, cloud native ecosystem.

The product of the maturation of a number of technologies, including telco cloudification, RAN virtualization, Open RAN and, lately, AI/ML, AI-RAN has been positioned as a means to further disaggregate and open up the RAN infrastructure.

This latest development has to be examined from an economic standpoint. The RAN accounts for roughly 80% of a telco's deployment costs (excluding licenses, real estate...), and roughly 80% of those costs are attributable to the radios themselves and their electronics. The market is dominated by a few vendors, leaving telecom operators exposed to substantial supply chain risks and reduced purchasing power.

The AI-RAN Alliance was created in 2024 to accelerate its adoption. It is led by network operators (T-Mobile, Softbank, Boost Mobile, KT, LG Uplus, SK Telecom...) and telecom and IT vendors (Nvidia, Arm, Nokia, Ericsson, Samsung, Microsoft, Amdocs, Mavenir, Pure Storage, Fujitsu, Dell, HPE, Kyocera, NEC, Qualcomm, Red Hat, Supermicro, Toyota...).

If you are familiar with this blog, you already know of the evolution from RAN to cloud RAN and Open RAN, and more recently the forays into RAN intelligence with the early implementations of the near and non real time RAN Intelligent Controllers (RIC).

AI-RAN goes one step further, proposing that the specialized electronics and software traditionally embedded in RAN radios be deployed on high compute, GPU-based commercial off-the-shelf servers, and that these GPUs both manage the complex RAN computation (beamforming management, spectrum and power optimization, waveform management...) and double as a general high compute environment for AI/ML applications that would benefit from deployment in the RAN (video surveillance; scene, object and biometrics recognition; augmented / virtual reality; real time digital twins...). It is very similar to the early edge computing market space.
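To make this dual-use idea concrete, here is a minimal sketch of the kind of arbitration an AI-RAN node implies, assuming a simple two-class priority model where deadline-bound RAN signal processing is always served before best-effort AI inference; the classes, jobs and deadlines are invented for illustration, not taken from any AI-RAN specification.

```python
import heapq
from dataclasses import dataclass, field

# Illustrative only: a toy dispatcher for the AI-RAN premise that
# deadline-bound RAN signal-processing jobs and best-effort AI jobs
# share the same GPU pool, with RAN work always served first.

RAN, AI = 0, 1  # priority classes: lower value = scheduled first

@dataclass(order=True)
class Job:
    priority: int
    deadline_ms: float
    name: str = field(compare=False)

class SharedGpuScheduler:
    def __init__(self):
        self._queue: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self._queue, job)

    def dispatch(self) -> list[str]:
        """Drain the queue in priority order; RAN jobs (beamforming,
        channel estimation...) run before AI inference backfills."""
        order = []
        while self._queue:
            order.append(heapq.heappop(self._queue).name)
        return order

sched = SharedGpuScheduler()
sched.submit(Job(AI, 50.0, "video-analytics-inference"))
sched.submit(Job(RAN, 0.5, "beamforming-weights"))
sched.submit(Job(RAN, 1.0, "channel-estimation"))
print(sched.dispatch())
# ['beamforming-weights', 'channel-estimation', 'video-analytics-inference']
```

In reality this arbitration is much harder (hard real-time deadlines, GPU partitioning, isolation), which is precisely the kind of problem the alliance's working groups are chartered to solve.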

The potential success of AI-RAN relies on a number of techno / economic assumptions:

For Operators:

  • It is desirable to deploy RAN management, analytics, optimization, prediction and automation algorithms in a multivendor environment that provides deterministic, programmable results.
  • Network operators will be able and willing to actively configure, manage and tune RAN parameters.
  • Deployment of AI-RAN infrastructure will be profitable (compute costs offset by the combination of optimization-driven cost reductions and new service opportunities).
  • AI-RAN power consumption, density, capacity and performance will, in time, exceed traditional architectures.
  • Network operators will be able to accurately predict demand and deploy infrastructure in time and in the right locations to capture it.
  • Network operators will be able to budget the CAPEX / OPEX associated with this investment before revenue materializes.
  • An ecosystem of vendors will develop, reducing supply chain risks.

For vendors:

  • RAN vendors will open their infrastructure and permit third parties to deploy AI applications.
  • RAN vendors will let operators and third parties program the RAN infrastructure.
  • There is sufficient market traction to productize AI-RAN.
  • The rate of development of AI and GPU technologies will outpace traditional architecture.
  • The cost of roadmap disruption and increased competition will be outweighed by new revenues, or is simply the cost of survival.
  • AI-RAN represents an opportunity for new vendors to emerge and focus on very specific aspects of the market demand without having to develop full stack solutions.

For customers:

  • There will be a market and demand for AI as a Service, whereby enterprises and verticals will want to use a telco infrastructure that provides unique computing and connectivity benefits over on-premise or public cloud solutions.
  • There are AI/ML services that (will) necessitate high performance computing environments with guaranteed, programmable connectivity, and a cost profile that is better mutualized through a multi-tenant environment.
  • Telecom operators are the best positioned to understand and satisfy the needs of this market.
  • Security, privacy, residency, performance and reliability will be at least equivalent to on-premise or cloud, with a cost / performance benefit.
As the market develops, new assumptions are added every day. The AI-RAN Alliance has defined three general groups to create the framework to validate them:
  1. AI for RAN: AI to improve RAN performance. This group focuses on how to program and optimize the RAN with AI. The expectation is that this work will drastically reduce the cost of the RAN, while allowing sophisticated spectrum, radio wave and traffic manipulations for specific use cases (a minimal sketch of this kind of loop follows this list).
  2. AI and RAN: Architecture to run AI and RAN on the same infrastructure. This group must find the multitenant architecture allowing the system to develop into a platform able to host a variety of AI workloads concurrently with the RAN.
  3. AI on RAN: AI applications to run on RAN infrastructure. This is the most ambitious and speculative group, defining the requirements on the RAN to support the AI workloads that will be defined.
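For the first group, here is a minimal sketch of an "AI for RAN" optimization loop, assuming a toy load forecaster that decides how many carriers to keep powered; the forecaster, figures and thresholds are invented for illustration.

```python
# Illustrative "AI for RAN" loop: forecast cell load from recent samples
# and power down spare carriers when low traffic is predicted.
# The forecaster and all figures are toy stand-ins for real models.

def forecast_load(history: list[float]) -> float:
    """Naive forecast: weighted moving average favoring recent samples."""
    weights = range(1, len(history) + 1)
    return sum(w * x for w, x in zip(weights, history)) / sum(weights)

def carriers_needed(predicted_load: float, total_carriers: int) -> int:
    """Keep just enough carriers for the predicted load, minimum one."""
    return max(1, round(predicted_load * total_carriers + 0.5))

history = [0.42, 0.35, 0.28, 0.20, 0.15]   # load trending down at night
predicted = forecast_load(history)
print(f"predicted load {predicted:.2f}, "
      f"keep {carriers_needed(predicted, 4)} of 4 carriers on")
```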
As with Telco Edge Computing and RAN intelligence, while the technological challenges appear formidable, the commercial and strategic implications are likely to dictate whether AI-RAN will succeed. Telecom operators are pushing for its implementation to increase control over RAN spending and user experience, while possibly developing new revenue with the diffusion of AIaaS. Traditional RAN vendors see the nascent technology as a further threat to their capacity to sell programmable networks as black boxes, configured, sold and operated by them. New vendors see the opportunity to step into the RAN market and carve out market share at the expense of legacy vendors.

Monday, March 10, 2025

MWC 25 thoughts

 Back from Mobile World Congress 2025!

I am so thankful I get to meet my friends, clients and ex-colleagues year after year and to witness firsthand how our industry is moving.

2025 was probably my 23rd congress or so and I always find it invaluable for many reasons. 



Innovation from the East

What stood out for me this year was how much innovation is coming from Asian companies, while most Western companies seem to be focusing on cost control.

The feeling was pervasive throughout the show and the GLOMO awards winners showed Huawei, ZTE, China Mobile, SK, Singtel… investing in discovering and solving problems that many in Western markets dismiss as futuristic or outside their comfort zone. In mature markets, where price attrition is the rule, differentiation is key.

On a related topic, being Canadian, I can't help thinking that many companies and regulators who looked at banning some Chinese vendors from their markets due to security concerns now find themselves having to evaluate whether American suppliers might not also represent a risk in the future.

Without delving into politics, I saw and heard many initiatives to enhance security, privacy, sovereignty, either in the cloud or the supply chain categories. 

Open telco APIs

Open APIs and the progress of telco network APIs are encouraging, but while it is a good idea, it feels late and lacking in comparison with webscalers' tooling and offerings to discover, consume and manage network functions on demand. Much work remains to be done, in my opinion, to enhance the aaS portion of the offering, particularly if slicing APIs are to be offered.

Open RAN & RIC

The Open RAN threat has successfully accelerated cloud and virtualized RAN adoption. Samsung started the trend, and Ericsson's deployment at AT&T has crystalized the mMIMO + CU + DU + non-RT RIC from a main vendor, with small cells + rApps from others, as a viable option. Vodafone's RAN refresh may see more players in the mix, as Mavenir and Nokia are struggling to gain meaningful market share.

The Juniper / HPE acquisition drama, together with the Broadcom / VMware commercial strategy, seems to have killed the idea of an independent non-RT RIC vendor. The near-RT RIC remains, in my mind, a flawed proposition as a host for third-party xApps, and an expensive gadget for anything other than narrow use cases.

AI

AI, of course, was the belle of the ball at MWC. Everyone had a twist, a demo, a model, an agent, but few were able to demonstrate utility beyond automated time series regression as predictions, or LLM-based natural language processing ad nauseam…

Some were convincingly starting to show Small Models tailored to their technology, topology and network, with promising results. It is still early, but it feels that this is where the opportunity lies. The creation and curation of a dataset that can be used to plan, manage, maintain and predict the state of one's network, with bespoke algorithms, seems more desirable than wholesale, vague, large and poorly trained models.

Telco Cloud and Edge computing is having a bit of a moment with AI and GPU aaS strategies being enacted.

All in all, many are trying to develop an AI strategy, and while we are still far from the AI-Native Telco Network, there is some progress and some interesting ventures amidst the noise.

Thursday, August 8, 2024

The journey to automated and autonomous networks

 

The TM Forum has been instrumental in defining the journey towards automation and autonomous telco networks. 

As telco revenues from consumers continue to decline and the 5G promise to create connectivity products that enterprises, governments and large organizations will be able to discover, program and consume remains elusive, telecom operators are under tremendous pressure to maintain profitability.

The network evolution that started with Software Defined Networks and Network Functions Virtualization, and more recently the Cloud Native evolution, aims to deliver network programmability for the creation of innovative on-demand connectivity services. Many of these services require deterministic connectivity parameters in terms of availability, bandwidth and latency, which necessitate an end to end cloud native fabric and the separation of control and data plane. Centralized control of the cloud native functions allows resources to be abstracted and allocated on demand as topology and demand evolve.

A benefit of a cloud native network is that, as software becomes more open and standardized in a multi vendor environment, many tasks that were either manual or relied on proprietary interfaces can now be automated at scale. As layers of software expose interfaces and APIs that can be discovered and managed by sophisticated orchestration systems, the network can evolve from manual, to assisted, to automated, to autonomous functions.


TM Forum defines six levels of evolution, from fully manual operation (Level 0) to fully autonomous networks (Level 5):

  • Level 0 - Manual operation and maintenance: The system delivers assisted monitoring capabilities, but all dynamic tasks must be executed manually.
  • Level 1 - Assisted operations and maintenance: The system executes a specific, repetitive subtask based on pre-configuration, which can be recorded online and traced, in order to increase execution efficiency.
  • Level 2 - Partial autonomous network: The system enables closed-loop operations and maintenance for specific units under certain external environments via statically configured rules.
  • Level 3 - Conditional autonomous network: The system senses real-time environmental changes and, in certain network domains, will optimize and adjust itself to the external environment to enable closed-loop management via dynamically programmable policies (the contrast with Level 2 is sketched below).
  • Level 4 - Highly autonomous network: In a more complicated cross-domain environment, the system enables decision-making based on predictive analysis or active closed-loop management of service-driven and customer experience-driven networks via AI modeling and continuous learning.
  • Level 5 - Fully autonomous network: The system has closed-loop automation capabilities across multiple services, multiple domains (including partners' domains) and the entire lifecycle via cognitive self-adaptation.
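The jump from Level 2 to Level 3, referenced above, is easiest to see in code. A minimal sketch, with invented thresholds and action names: the Level 2 loop hard-codes its rule, while the Level 3 loop takes its rule from a runtime-programmable policy.

```python
# Illustrative contrast between autonomy levels; every threshold,
# signal and action name here is invented for the example.

def level2_closed_loop(cell_load: float) -> str:
    """Level 2: a statically configured rule, fixed at deployment time."""
    if cell_load > 0.85:
        return "add-carrier"
    return "no-op"

def level3_closed_loop(cell_load: float, policy: dict) -> str:
    """Level 3: the same loop, but the policy (threshold and action)
    is programmable at runtime and can follow sensed context."""
    if cell_load > policy["load_threshold"]:
        return policy["action"]
    return "no-op"

# The operator re-targets the loop without redeploying code:
peak_policy = {"load_threshold": 0.70, "action": "add-carrier"}
night_policy = {"load_threshold": 0.95, "action": "shift-to-neighbor"}

print(level2_closed_loop(0.90))               # add-carrier
print(level3_closed_loop(0.75, peak_policy))  # add-carrier
print(level3_closed_loop(0.75, night_policy)) # no-op
```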
After describing the framework and conditions for the first three levels, the TM Forum recently published a white paper describing the Level 4 industry blueprints.

The stated goals of Level 4 are to enable the creation and roll out of new services within 1 week with deterministic SLAs, and the delivery of Network as a Service. Furthermore, this level should allow fewer personnel to manage the network (1000's of person-years) while reducing energy consumption and improving service availability.

These are certainly very ambitious objectives. The paper goes on to describe "high value scenarios" to guide level 4 development. This is where we start to see cognitive dissonance creeping in between the stated objectives and the methodology.  After all, much of what is described here exists today in cloud and enterprise environments and I wonder whether Telco is once again reinventing the wheel in trying to adapt / modify existing concepts and technologies that are already successful in other environments.

First, the creation of deterministic connectivity is not (only) the product of automation. Telco networks, in particular mobile networks, are composed of a daisy chain of network elements that require customer traffic, signaling, data repository, look-up, authentication, authorization, accounting and policy management functions to be coordinated. On the mobile front, signal effectiveness varies over time, as weather, power, demand, interference, devices... impact the effective transmission. Furthermore, the load on the base station, the backhaul, the core network and the internet peering point also varies over time and impacts overall capacity.

As you understand, creating a connectivity product with deterministic speed, latency and capacity to enact Network as a Service requires a systemic approach. In a multivendor environment, the RAN, the transport and the core must be virtualized, relying on solid fiber connectivity as much as possible to enable the capacity and speed. Low latency requires multiple computing points, all the way to the edge or on premise. Deterministic performance requires not only virtualization and orchestration of the RAN, but also of the PON fiber, plus end to end slicing support and orchestration.

This is something that I led at Telefonica with an open compute edge computing platform, a virtualized (XGS) PON on an ONF ONOS VOLTHA architecture and an open virtualized RAN. This was not automated yet, as most of these elements were advanced prototypes at that stage, but automation is the "easy" part once you have assembled the elements and operated them manually for long enough. The point here is that deterministic network performance is attainable but still a distant objective for most operators, and it is a necessary condition to enact NaaS, before even automation and autonomous networks.

Second, the high value scenarios described in the paper are all network-related. Ranging from network troubleshooting, to optimization and service assurance, these are all worthy objectives, but still do not feel "high value" in terms of creation of new services. While it is natural that automation first focuses on cost reduction for roll out, operation, maintenance, healing of network, one would have expected more ambitious "new services" description.

All in all, the vision is ambitious, but there is still much work to do in fleshing out the details and linking the promised benefits to concrete services beyond network optimization.

Monday, January 15, 2024

Gen AI and LLM: Edging the Latency Bet

The growth of generative AI and Large Language Models has restarted a fundamental question about the value of a millisecond of latency. When I was at Telefonica, and later, consulting at Bell Canada, one of the projects I was looking after was the development, business case, deployment, use cases and operation of Edge Computing infrastructure in a telecom network.

Having developed and deployed Edge Computing platforms since 2016, I have had a head start in figuring out the fundamental questions surrounding the technology, the business case and the commercial strategy.

Where is the edge?


The first question one has to tackle is: where is the edge? It is an interesting question because the answer depends on your perspective. The edge is a different location if you are a hyperscaler, a telco network operator or a developer. It can also vary over time and geography. In any case, the edge is a place where one can position compute closer than the current public or private cloud infrastructure in order to derive additional benefits. It can range from a regional, to a metro, to a mini data center, all the way to on-premise or on-device compute capability. Each has its distinct costs, limitations and benefits.

What are the benefits of Edge Computing?


The second question, or maybe the first one, from a pragmatic and commercial standpoint is why do we need edge computing? What are the benefits?

While these will vary depending on the consumer of the compute capability and on where the compute function is located, we can derive general benefits that will be indexed to the location. Among these, we can list data sovereignty, increased privacy and security, reduced latency, the enablement of cheaper (dumber) devices, and the creation of new media types, models and services.

What are the use cases of Edge Computing?

I have deployed and researched over 50 use cases of edge computing, from the banal storage, caching and streaming at the edge, to sophisticated TV production, specialized Open RAN or telco User Plane Function deployments, and machine vision use cases for industrial and agricultural applications.

What is the value of 1ms?

Sooner or later, after testing various use cases, locations and architectures, the fundamental question emerges: what is the value of 1ms? It is a question heavy with assumptions and correlations. In absolute terms, we would all like connectivity that is faster, more resilient, more power efficient, more economical and lower latency. The factors that condition latency are the number of hops or devices the connection has to traverse between the device and the origin point where the content or code is stored, transformed and computed, and the distance between the device and the compute point. To radically reduce latency, you have to reduce the number of hops or reduce the distance; Edge Computing achieves both.
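The arithmetic is worth making explicit. A back-of-the-envelope sketch, assuming roughly 5 microseconds of one-way propagation per km in fibre (light travels at about 2x10^8 m/s in glass) and a notional per-hop processing cost; the placements and hop counts are illustrative assumptions.

```python
# Back-of-the-envelope round-trip latency for different compute
# placements. Both constants are illustrative assumptions.

PROPAGATION_MS_PER_KM = 0.005   # one-way, in fibre (~2e8 m/s)
PER_HOP_MS = 0.25               # assumed router/switch traversal cost

def round_trip_ms(distance_km: float, hops: int) -> float:
    return 2 * (distance_km * PROPAGATION_MS_PER_KM + hops * PER_HOP_MS)

placements = {
    "on-premise edge":      (1, 1),
    "metro edge":           (30, 3),
    "regional data center": (300, 6),
    "centralized cloud":    (1500, 10),
}

for name, (km, hops) in placements.items():
    print(f"{name:22s} ~{round_trip_ms(km, hops):5.1f} ms RTT")
```

Under these assumptions, only on-premise and metro placements deliver low single-digit millisecond round trips; a centralized cloud sits an order of magnitude away.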

But obviously, there is a cost. Latency is proportional to distance, so the fundamental question becomes: what is the optimal placement of a compute resource, and for which use case? Computing is a continuum: some applications and workloads are not latency, privacy or sovereignty sensitive and can run in an indiscriminate public cloud, while others necessitate the compute to be in the same country, region or city, and others still require even closer proximity. The difference in investment is staggering between a handful of centralized data centers and several hundreds / thousands? of micro data centers.

What about AI and LLM?

Until now, these questions were somewhat theoretical and were answered organically by hyperscalers and operators based on their respective views of the market's evolution. Generative AI and its extraordinary appetite for compute is rapidly changing this market space. Not only does Gen AI account for a sizable and growing portion of all cloud compute capacity, but the question of latency is now coming to the fore. Gen AI relies on Large Language Models that require large amounts of storage and compute to be trained to recognize patterns. The larger the LLM, the more compute capacity, the better the pattern recognition. Pattern recognition leads to the generation of similar results based on incomplete prompts / questions / data sets: that is Gen AI. Where does latency come in? Part of the compute needed to generate a response to a question is in the inference business. While the data set resides in a large compute data center in a centralized cloud, inference sits closer to the user, at the edge, where it parses the request and feeds the trained model with unlabeled input to receive a predicted answer. The faster the inference, the more responses the model can provide, which means that low latency is a competitive advantage for a Gen AI service.

As we have seen, there is a relatively small number of options to reduce latency, and they all involve large investments. The question then becomes: what is the value of a millisecond? Is 100 or 10 sufficient? When it comes to high frequency trading, 1ms is extremely valuable (billions of dollars). When it comes to online gaming, low latency is not as valuable as controlled and uniform latency across the players. When it comes to video streaming, latency is generally not an issue, but when it comes to machine vision for sorting fruit on a mechanical conveyor belt running at 10km/h, it is very important.

I have researched and deployed many edge computing use cases and derived a fairly comprehensive workshop on the technological, commercial and strategic aspects of Edge computing and Data Center investment strategies.

If you would like to know more, please get in touch.

Thursday, November 23, 2023

Announcing Private Networks 2024


Telecoms cellular networks, delivered by network operators, have traditionally been designed to provide coverage and best effort performance for consumers' general use. This design prioritizes high population density areas, emphasizing cost-effective delivery of coverage solutions with a network architecture treating all connections uniformly, effectively sharing available bandwidth. In some markets, net neutrality provisions further restrict the prioritization of devices, applications, or services over others.

Enterprises, governments, and organizations often turn to private networks due to two primary reasons. First, there may be no commercial network coverage in their operational areas. Second, even when commercial networks are present, they may fail to meet the performance requirements of these entities. Private networks offer a tailored solution, allowing organizations to have dedicated, secure, and high-performance connectivity, overcoming limitations posed by commercial networks.

Enterprises, industries, and government IT departments have developed a deep understanding of their unique connectivity requirements over the years. Recognizing the critical role that connectivity plays in their operations, these entities have sought solutions that align closely with their specific needs. Before the advent of 5G technology, Wi-Fi emerged as a rudimentary form of private network, offering a more localized and controlled connectivity option compared to traditional cellular networks. However, there were certain limitations and challenges associated with Wi-Fi, and the costs of establishing and operating fully-fledged private networks were often prohibitive.

Enterprises, industries, and government organizations operate in diverse and complex environments, each with its own set of challenges and requirements. These entities understand that a one-size-fits-all approach to connectivity is often inadequate. Different sectors demand varied levels of performance, security, and reliability to support their specific applications and processes. This understanding has driven the search for connectivity solutions that can be tailored to meet the exacting standards of these organizations.

Wi-Fi technology emerged as an early solution that provided a degree of autonomy and control over connectivity. Enterprises and organizations adopted Wi-Fi to create local networks within their premises, enabling wireless connectivity for devices and facilitating communication within a confined area. Wi-Fi allowed for the segmentation of networks, offering a level of privacy and control that was not as pronounced in traditional cellular networks.

However, Wi-Fi also came with its limitations. Coverage areas were confined, and the performance could be affected by interference and congestion, especially in densely populated areas. Moreover, the security protocols of Wi-Fi, while evolving, were not initially designed to meet the stringent requirements of certain industries, such as finance, healthcare, or defense.

Establishing and operating private networks before the advent of 5G technology posed significant financial challenges. The infrastructure required for a dedicated private network, including base stations, networking equipment, and spectrum allocation, incurred substantial upfront costs. Maintenance and operational expenses added to the financial burden, making it cost-prohibitive for many enterprises and organizations to invest in private network infrastructure.

Moreover, the complexity of managing and maintaining a private network, along with the need for specialized expertise, further elevated the costs. These challenges made it difficult for organizations to justify the investment in a private network, especially when commercial networks, despite their limitations, were more readily available and appeared more economically feasible.

The arrival of 5G technology has acted as a game-changer in the landscape of private networks. 5G offers the potential for enhanced performance, ultra-low latency, and significantly increased capacity. These capabilities address many of the limitations that were associated with Wi-Fi and earlier generations of cellular networks. The promise of 5G has prompted enterprises, industries, and government entities to reassess the feasibility of private networks, considering the potential benefits in terms of performance, security, and customization.

The growing trend of private networks can be attributed to several key factors:

  • Performance Customization: Private networks enable enterprises and organizations to customize their network performance according to specific needs. Unlike commercial networks that provide best effort performance for a diverse consumer base, private networks allow for tailored configurations that meet the unique demands of various industries
  • Security and Reliability: Security is paramount for many enterprises and government entities. Private networks offer a higher level of security compared to public networks, reducing the risk of cyber threats and unauthorized access. Additionally, the reliability of private networks ensures uninterrupted operations critical for sectors like finance, healthcare, and defense.
  • Critical IoT and Industry 4.0 Requirements: The increasing adoption of Industrial IoT (IIoT) and Industry 4.0 technologies necessitates reliable and low-latency connectivity. Private networks provide the infrastructure required for seamless integration of IoT devices, automation, and real-time data analytics crucial for modern industrial processes.
  • Capacity and Bandwidth Management: In sectors with high data demands, such as smart manufacturing, logistics, and utilities, private networks offer superior capacity and bandwidth management. This ensures that enterprises can handle large volumes of data efficiently, supporting data-intensive applications without compromising on performance.
  • Flexibility in Deployment: Private networks offer flexibility in deployment, allowing organizations to establish networks in remote or challenging environments where commercial networks may not be feasible. This flexibility is particularly valuable for industries such as mining, agriculture, and construction.
  • Compliance and Control: Enterprises often operate in regulated environments, and private networks provide greater control over compliance with industry-specific regulations. Organizations can implement and enforce their own policies regarding data privacy, network access, and usage.
  • Edge Computing Integration: With the rise of edge computing, private networks seamlessly integrate with distributed computing resources, reducing latency and enhancing the performance of applications that require real-time processing. This is particularly advantageous for sectors like healthcare, where quick data analysis is critical for patient care.

As a result of these factors, the adoption of private networks is rapidly becoming a prominent industry trend. Organizations across various sectors recognize the value of tailored, secure, and high-performance connectivity that private networks offer, leading to an increasing shift away from traditional reliance on commercial cellular networks. This trend is expected to continue as technology advances and industries increasingly prioritize efficiency, security, and customized network solutions tailored to their specific operational requirements.

With the transformative potential of 5G, these entities are now reevaluating the role of private networks, anticipating that the advancements in technology will make these networks more accessible, cost-effective, and aligned with their specific operational requirements.

Terms and conditions available on demand: patrick.lopez@coreanalysis.ca  

Tuesday, November 7, 2023

What's behind the operators' push for network APIs?

 


As I saw the latest announcements from the GSMA, Telefonica and Deutsche Telekom, as well as Ericsson's asset impairment on the Vonage acquisition, I was reminded of the call I made three years ago for the creation of operator platforms.

On one hand, 21 large operators (namely America Movil, AT&T, Axiata, Bharti Airtel, China Mobile, Deutsche Telekom, e& Group, KDDI, KT, Liberty Global, MTN, Orange, Singtel, Swisscom, STC, Telefónica, Telenor, Telstra, Telecom Italia (TIM), Verizon and Vodafone) launched, within the GSMA, an initiative to open their networks to developers with 8 "universal" APIs (SIM Swap, Quality on Demand, Device Status, Number Verification, Simple Edge Discovery, One Time Password SMS, Carrier Billing – Check Out and Device Location).

Additionally, Deutsche Telekom was the first to pull the trigger on the launch of its own gateway, "MagentaBusiness API", based on Ericsson's depreciated asset. The 3 APIs launched are Quality on Demand, Device Status – Roaming and Device Location, with more to come.

Telefonica, for its part, launched its own Open Gateway offering shortly after DT, with 9 APIs (Carrier Billing, Know Your Customer, Number Verification, SIM Swap, QoD, Device Status, Device Location, QoD Wifi and Blockchain Public Address).

On the other hand, Ericsson wrote off 50% of the Vonage acquisition, while "creating a new market for exposing 5G capabilities through network APIs".

Dissonance much? Why are operators launching network APIs with fanfare while one of the earliest and largest vendors in the field reports asset depreciation, even as it claims a large market opportunity?

The move for telcos to expose network APIs is not new and has had a few unsuccessful tries (GSMA OneAPI in 2013, DT's MobiledgeX launch in 2019). The premises have varied over time, but the central tenet remains the same. Although operators have great experience in rolling out and operating networks, they have essentially been providing the same connectivity services to all consumers, enterprises and governmental organizations without much variation. The growth in cloud networks is underpinned by new generations of digital services, ranging from social media, video streaming and gaming for consumers, to cloud storage, computing, CPaaS and the cloud migration of IT functions for enterprises. Telcos have been mostly observers in this transition, with some timid tries to participate, but by and large they have been quite unsuccessful in creating and rolling out innovative digital services. As edge computing and the Open RAN RIC become possibly the first applications forcing telcos to look at tie-ins with hyperscale cloud providers, several strategic questions arise.

Telcos have been using cloud fabric and porting their vertical, proprietary systems to cloud native environments for their own benefit. As this transition progresses, there is a realization that the growth of private networks is a reflection of enterprises' desire to create and manage their connectivity products themselves. While operators have been architecting and planning their networks for network slicing, hoping to sell managed connectivity services to enterprises, the latter have been effectively managing their connectivity, in the cloud and in private networks, without the telcos' assistance. This realization leads to an important decision: if enterprises want to manage their connectivity themselves and expand that control to 5G / cellular, should telcos let them and, if so, by what means?

The answer is in network APIs. Without giving third parties access to the network itself, the best solution is to offer a set of controlled, limited tools that allow them to discover, reserve and consume network resources, while the operator retains overall control of the network. There are a few conditions for this to work.

The first is essentially the necessity for universal access. Enterprises and developers have gone through the learning curve of using AWS, Google Cloud and Azure tools, APIs and semantics. They can conceivably see value in learning a new set with these telco APIs, but won't likely go through the effort if each telco has a different set in each country.

The second, and historically the hardest for telcos, is to create and manage an ecosystem and developer community. They have tried many times and in different settings, but in many cases have failed, only enlisting friendly developers, in the form of their suppliers and would-be suppliers, dedicating efforts to further their commercial opportunities. The jury is still out as to whether this latest foray will be successful in attracting independent developers.

The third, and possibly the riskiest part of this equation, is that the premise itself is untested: which APIs will prove useful, and will enterprises and developers actually want to use them? Operators are betting that they can essentially create a telco cloud experience for developers more than 15 years after AWS launched, with fewer tools, less capacity to innovate, fewer cloud native skills and a pretty bad record in nurturing developers and enterprises.
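Whatever the outcome of that bet, the developer-facing surface itself can be simple. Here is a minimal sketch of what consuming one of these APIs could look like, shaped after the CAMARA Quality on Demand style of API behind GSMA Open Gateway; the endpoint, field names, token handling and QoS profile are assumptions for illustration, not any specific operator's contract.

```python
import requests  # third-party HTTP client

# Illustrative only: a request shaped after the CAMARA / Open Gateway
# Quality-on-Demand style. Endpoint, fields and profile are assumed.

BASE_URL = "https://api.example-operator.com/qod/v0"  # hypothetical
TOKEN = "..."  # obtained via the operator's OAuth2 flow

session_request = {
    "device": {"phoneNumber": "+14165550123"},
    "applicationServer": {"ipv4Address": "198.51.100.10"},
    "qosProfile": "QOS_E",   # e.g. a low-latency profile name
    "duration": 3600,        # seconds of requested QoS treatment
}

resp = requests.post(
    f"{BASE_URL}/sessions",
    json=session_request,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("QoD session created:", resp.json().get("sessionId"))
```

The whole pitch to developers is that reserving network quality should feel like any other cloud API call; whether the semantics end up universal across operators is exactly the first condition above.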

Ericsson's impairment of Vonage probably acknowledges that the central premise that telco APIs are desirable is unproven, that if it succeeds, operators will want to retain control, and that there is less value in the platform than in the APIs themselves (the GSMA launch on an open source platform essentially depreciates the Vonage acquisition directly).

Another path exists, which provides less control (and commercial upside) for telcos: hosting third party cloud functions in their networks, even allowing third party cloud infrastructure (such as AWS Outposts, for instance) to be collocated in their data centers. This option comes with the benefit of an existing ecosystem, toolset, services and clients, simply extending the cloud into the telco network. The major drawback is that the telco accepts its role as a utility provider of connectivity, with little participation in the service value creation.

Both scenarios are being played out right now and both paths represent much uncertainty and risks for operators that do not want to recognize the strategic implications of their capabilities.


Friday, November 3, 2023

Telco edge compute, RAN and AI


In recent years, the telecommunications industry has witnessed a profound transformation, driven by the rapid penetration of cloud technologies. Cloud Native Functions (CNFs) have become common in the packet core, OSS / BSS and transport, and are making their way into the access domain, both fixed and mobile. CNFs mean that virtual infrastructure management and data centers have become an important part of network capex strategies.

While edge computing in telecoms, with the emergence of MEC (Multi-Access Edge Computing), has been mostly confined to telco network functions (UPF, RAN CU/DU...), network operators should now explore the opportunities for the retail and wholesale of edge computing services. My workshop examines in detail the strategies, technologies and challenges associated with this opportunity.

Traditional centralized cloud infrastructure is being augmented with edge computing, effectively bringing computation and data storage closer to the point of data generation and consumption.

What are the benefits of edge computing for telecom networks?

  • Low Latency: One of the key advantages of edge computing is its ability to minimize latency. This is of paramount importance in telecoms, especially in applications like autonomous vehicles, autonomous robots / manufacturing, and remote-controlled machinery.
  • Bandwidth Efficiency: Edge computing reduces the need for transmitting massive volumes of data over long distances, which can strain network bandwidth. Instead, data processing and storage take place at the edge, significantly reducing the burden on core networks. This is particularly relevant for machine vision, video processing and AI use cases (a minimal sketch of this pattern follows this list).
  • Enhanced Security: Edge computing offers improved security by allowing sensitive data to be processed locally. This minimizes the exposure of critical information to potential threats in the cloud. Additionally, privacy, data sovereignty and residency concerns can be efficiently addressed by local storage / computing.
  • Scalability: Edge computing enables telecom operators to scale resources as needed, making it easier to manage fluctuating workloads effectively.
  • Simpler, cheaper devices: Edge computing allows devices to be cheaper and simpler while retaining sophisticated functionalities, as storage, processing can be offloaded to a nearby edge compute facility.
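As referenced in the bandwidth efficiency bullet above, here is a minimal sketch of edge-side data thinning, assuming a toy sensor stream where only compact summaries (plus raw detail on alarm) are shipped to the central cloud; the window size, fields and threshold are invented for illustration.

```python
# Illustrative data thinning at the edge: aggregate raw sensor samples
# locally and ship only compact summaries upstream. All figures and
# the summary format are invented for the example.

from statistics import mean

def thin(samples: list[float], alarm_threshold: float) -> dict:
    """Reduce a window of raw readings to a summary, keeping raw data
    only when an anomaly warrants cloud-side investigation."""
    summary = {
        "count": len(samples),
        "mean": round(mean(samples), 2),
        "max": max(samples),
    }
    if summary["max"] > alarm_threshold:
        summary["raw"] = samples  # escalate full detail only on alarm
    return summary

window = [21.1, 21.3, 21.2, 21.4, 21.2]  # e.g. one window of readings
print(thin(window, alarm_threshold=30.0))
# a few bytes upstream instead of the full sample stream
```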

Current Trends in Edge Computing for Telecoms

The adoption of edge computing in telecoms is rapidly evolving, with several trends driving the industry forward:

  • 5G and private networks integration: The deployment of 5G networks is closely intertwined with edge computing. 5G's high data transfer rates and low latency requirements demand edge infrastructure to deliver on its promises effectively. Cloud RAN and the service based architecture packet core drive demand for edge computing for the colocation of UPF and CU/DU functions, particularly for private networks.
  • Network Slicing: Network operators are increasingly using network slicing to create virtualized network segments, allowing them to allocate resources and customize services for different applications and use cases.
  • Ecosystem Partnerships: Telcos are forging partnerships with cloud providers, hardware manufacturers, and application developers to explore retail and wholesale edge compute services.

Future Prospects

The future of edge computing in telecoms offers several exciting possibilities:
  • Edge-AI Synergy: As artificial intelligence becomes more pervasive, edge computing will play a pivotal role in real-time AI processing, enhancing applications such as facial recognition, autonomous drones, and predictive maintenance. Additionally, AI/ML is emerging as a key value proposition in a number of telco CNFs, particularly in the access domain, where RAN intelligence is key to optimize spectrum and energy usage, while tailoring user experience.
  • Industry-Specific Edge Solutions: Different industries will customize edge computing solutions to cater to their unique requirements. This could result in the development of specialized edge solutions for healthcare, manufacturing, transportation, and more.
  • Edge-as-a-Service: Telecom operators are likely to offer edge services as a part of their portfolio, allowing enterprises to deploy and manage edge resources with ease.
  • Regulatory Challenges: As edge computing becomes more integral to telecoms, regulatory challenges may arise, particularly regarding data privacy, security, and jurisdictional concerns.

New revenues streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited due to the poor online experience. The most successful game franchises are Massively Multiplayer Online. They pitch dozens of players against each other and require a very controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, if they deploy gaming servers at the edge. Even without a full blown gaming service, providing game caching at the edge can drastically reduce the download time for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today that are proven to provide superior experience compared to the cloud. These services range from high performance cloud storage, to remote desktop, video surveillance and recognition.
  • Beyond operators-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
Edge computing is rapidly maturing in the telecom industry by enabling low-latency, high-performance, and secure services that meet the demands of new use cases. As we move forward, the integration of edge computing with 5G and the continuous development of innovative applications will shape the industry's future. Telecom operators that invest in edge computing infrastructure and capabilities will be well-positioned to capitalize on the opportunities presented by this transformative technology. 


Thursday, July 27, 2023

The 5G letdown


I have often written about what I think are the necessary steps for network operators to grow and prosper in our digital world. Covid, the changes in work modes, the hiring gluttony of the GAFAs and the geopolitical situation, between the banning of untrusted vendors and the consequences of a European conflict, have created quite a different situation today.

Twitter's (or X's) reorganization and mass layoffs signaled to the tech industry that it was OK to look for productivity and profitability, and that over-hiring without a clear mission or reorienting entire company strategies on far fetched, unproven concepts (web3, metaverse, crypto...) had very costly consequences. Fast forward to this summer of 2023: most GAFAs have been refocusing their efforts on their core business, with less intent on changing the telecoms landscape. This lull has allowed many network operators to post healthy growth and profits, while simultaneously laying off / fast tracking early retirement for some of their least adequately skilled personnel.

I think that a lot of these positive telco results are conjunctural rather than structural, and one crucial issue remains for operators (and their suppliers): 5G is a bust. So far.

The consumer market is not really looking for more speed at this time. The main selling proposition of 5G seems to be having a 5G logo on your phone. I have 4G and 5G phones and I can't really tell the difference from a network user experience standpoint.

No real 5G use case has emerged to justify the hype, and all in all, consumers are more likely to fork out 1000s of $ for a new device than an additional 10 per month for "better" connectivity. Especially since we, the telco literati, know that 5G Non Stand Alone is not really 5G, more like 4G+. Until 5G Stand Alone emerges dominantly, the promises of 5G won't be fulfilled.

The promise and business case of 5G were supposed to revolve around new connectivity services. Until now, essentially, whether you have a smartphone, a tablet, a laptop, a connected car or an industrial robot, and whether you are a work-from-home or road-warrior professional, all connectivity products are really the same. The only variables are price and coverage.

5G was supposed to offer connectivity products that could be adapted to different device types, verticals and industries, geographies, vehicles, drones... The 5G business case hinges on enterprise, vertical and government adoption and willingness to pay for enhanced connectivity services. By and large, this hasn't happened yet. There are several reasons for this, the main one being that to enable these services, a network overhaul is necessary.

First, a service-based architecture is necessary, comprising 5G Stand Alone, telco cloud, Multi-Access Edge Computing (MEC), and Service Management and Orchestration. Then, cloud-native RAN, either cloud RAN or Open RAN (and particularly the RAN Intelligent Controllers - RICs), would be useful. All this "plumbing" enables end to end slicing, which in turn creates the capability to serve distinct and configurable connectivity products.

But that's not all... A second issue is that although it is accepted wisdom that slicing will create connectivity products that enterprises and governments will be ready to pay for, there is little evidence of it today. One of the key differentiators of the "real" 5G and slicing will be deterministic speed and latency. While most actors in the market are ready to recognize that, in principle, controllable latency would be valuable, no one really knows the incremental value of going from variable best effort to a deterministic 100, 10 or 5 millisecond latency.

The last hurdle is the realization by network operators that Mercedes, Walmart, 3M, Airbus... have a better understanding of their connectivity needs than any carrier, and that they have skilled people able to design networks and connectivity services across WAN, cloud, private and cellular networks. All they need is access and a platform with APIs. A means to discover, reserve and design connectivity services on the operator's network will be necessary, and the successful operators will understand that their network skillset might be useful for consumers and small / medium enterprises, but less so for large verticals, governments and companies.

My Telco Cloud + Edge Computing and Open RAN workshops examine the technologies, use cases, implementations, strategies, operators and vendors who underlie the key growth factors for telco operators' and vendors' success in the "real" 5G.



Tuesday, October 6, 2020

Telco grade or Cloud grade?

 

For as long as I can remember, working in Telco, there has been the assumption that Telco networks were special. 

They are regulated, they are critical infrastructure, they require a level of engineering and control that goes beyond traditional IT. This has often been the reason why some technologies and vendors haven't been that successful in that space, despite having stellar records in other equally (more?) demanding industries such as energy, finance, space, defence...

Being Telco grade, when I cut my teeth as a telco supplier, meant high availability (5x9's), scalability and performance (100's of millions of simultaneous streams, connections, calls...) and security, achieved with multiple vertical and horizontal redundancies and deployment on highly specialized appliances.

Along comes the Cloud, with its fancy economics, underpinned by the separation of hardware and software, virtualization, then decomposition, then disaggregation of software elements into microservices. Add to it some control / user plane separation; centralized control, management, configuration, deployment, roll out and scalability rules; a little decentralized telemetry; and systematic automation through the radical opening of APIs between layers... That's the recipe for Cloud grade networks.

At the beginning, the Telco-natives looked at these upstarts with a little disdain: "that's good for web traffic. If a request fails, you just retry. It will never be enough for Telco grade...".

Then with some interest: "maybe we can use that Cloud stuff for low networking, low compute stuff like databases, inventory management... It's not going to enable real telco grade stuff, but maybe there are some savings".

Then, more seriously: "we need to harness the benefits of the cloud for ourselves. We need to build a Telco cloud". This is about the time the seminal white paper on Telco virtualization launched NFV and a flurry of activity to take an IT-designed cloud fabric (read OpenStack) and make it Telco grade (read: pay traditional Telco vendors, who have never developed or deployed a cloud fabric at scale, to make proprietary branches of an open source project, hardened with memorable features such as DPDK, SR-IOV and CPU pinning, so that the porting of their proprietary software onto a hypervisor does not die under the performance SLA...).

Fast forward a few years: orchestration and automation become the latest targets, and a zoo of competing proprietary-turned-open-source projects starts to emerge, whereby large communities of traditional telco vendors are invited to charitably contribute time and code on behalf of Telcos to projects that they have no interest in developing or selling.

In the meantime, Cloud grade has grown in coverage, capacity, ecosystem, revenues, use cases, flexibility, availability, scalability... by almost any metric you can imagine, while reducing costs and prices. Additionally, we are seeing new "cloud native" vendors emerge with Telco products that are very close to the Telco grade ideal in terms of performance, availability and scalability, at a fraction of the cost of the Telco-natives. Telco functions that the Telco-natives swore could never find their way to the cloud are being deployed there: security, connectivity, core networks, even RAN...

I think it is about time that the Telco-natives accept and embrace that it is probably faster, more cost efficient and more scalable to take a Cloud-native function and make it Telco grade than to take the whole legacy Telco network and try to make it Cloud grade. It doesn't mean throwing away all the legacy investment, but it does mean at least considering a sunsetting strategy and cap-and-grow. Of course, it also means being comfortable with the fact that the current dependencies on traditional Telco vendors might have to be traded for dependencies on hyperscalers, who might, or might not, become competitors down the line. Not engaging with them is not going to change that fact. 5G Stand Alone, Open RAN and MEC are probably good places to start, because they are greenfield. This is where the smart money is these days, as entry strategies into the Telco world go...



Monday, May 25, 2020

Why telco operators need a platform for edge computing


Initially published in The Mobile Network.

Extracted from the edge computing and hybrid cloud 2020 report.

Edge computing and hybrid clouds have become the subject of many announcements and acquisitions over the last few months.
Edge computing, in order to provide the capacity for developers and third parties to reserve and consume operators' computing, storage and networking capacity, needs a platform. The object of this platform is to provide a web interface and a series of APIs to abstract network topology and complexity, and to offer developers a series of cloud services and products to package within their offerings. Beyond the hyperscalers, who have natively developed these platforms, a few vendors have emerged in the telco space, such as MobiledgeX and ORI Industries.
Network operators worldwide are confronted with the inexorable growth of their data traffic due to consumers' voracious appetite for video streaming and gaming. Since video content is the largest and fastest growing data type in the networks, an economic challenge is slowly arising. Data charging models have departed from per-Megabyte metered billing to bundles and unlimited data, which encourages traffic growth while reducing the operators' capacity to monetize it. Consumers are not willing to pay much more for an HD video versus Standard Definition. For them, it is essentially the same service, and the operator is to blame if the quality is not sufficient. Unfortunately, the problem is likely to accelerate with emerging media-hungry video services relying on 4K, 8K and Augmented Reality. As a consequence, the average revenue per user stagnates in most mature markets, while costs continue to rise to increase network capacity.
While 5G promises extraordinary data speeds, enough to complement or equal fixed fibre capacity, there is no real evidence that the retail consumer market will be willing to pay a premium for improved connectivity. If 5G goes the way of 4G, the social media, video streaming, gaming and internet giants will be the ones profiting from the growth in digital services. The costs of deploying 5G networks will range from the low to the double digit billions, depending on the market, so… who will foot the bill?
If properly executed, the 5G roll out will become in many markets the main broadband access at scale. As this transition occurs, new opportunities arise to bundle mobile connectivity with higher level services, but because the consumer market is unlikely to drastically change its connectivity needs in the short term, the enterprise market is the most likely growth opportunity for 5G in the short to medium term.
Enterprises themselves are undergoing a transformation, with the commoditization of cloud offerings.
Cloud is one of the fastest growing ICT businesses worldwide, with IaaS the fastest growing segment. Most technology companies are running their business on cloud technology, be it private or public and many traditional verticals are now considering the transition.
Telecom operators have mostly lost the cloud battle: AWS, Microsoft, Google and Alibaba have been able to convert their global networks of data centers into an elastic, on-demand, as-a-service economy.
Edge computing, the deployment of mini data centers in telco networks promises to deliver a range of exciting new digital services. It may power remote surgery, self driving cars, autonomous industrial robots, drone swarms and countless futuristic applications.
In the short term, though, the real opportunity is for network operators to rejoin the cloud value chain, by providing a hyper local, secure, high performance, low latency edge cloud that will complement the public and private clouds deployed today.
Most private and public clouds ultimately stumble upon the “last mile” issue. Not managing the connectivity between the CPE, the on-premise data center and the remote data center means more latency, less control and more possibility for hacking or privacy issues.
Operators have a chance to partner with the developer community and provide them with a cloud flavour that extends and improves current public and private cloud capabilities.
The edge computing market is still emerging, with many different options in terms of location, distribution, infrastructure and management, but what is certain is that it will need to be more of a cloud network than a telco network if it succeeds in attracting developers.
Beyond the technical details that are being clarified by deployments and standards, the most important gap network operators need to bridge to offer a true cloud experience is the platform. Operators have traditionally deployed private clouds for their own purposes - to manage their networks. These clouds do not have all the features we expect from a commercial public cloud (lifecycle management, third party authentication, reservation, fulfillment…). The key for network operators to capture the enterprise opportunity is to offer a set of APIs that are as simple as those of the public clouds, so that developers and enterprises may reserve, consume and pay for edge computing and connectivity workloads and pipelines.
A possible outcome of this need if operators do not open their private cloud to enterprises is that hyperscalers will expand their clouds to operators’ networks and provide these services to their developer and client community. This would mean that operators would be confined to a strict connectivity utility model, where traffic prices would inexorably decline due to competitive pressure and high margin services would be captured by the public cloud.
Edge computing can allow operators to offer IaaS and PaaS services to enterprises and developers with performance that traditional clouds cannot match:
  • Ultra-low, guaranteed latency (typically 3 to 25 ms between the CPE and the first virtual machine in the local cloud)
  • Guaranteed throughput (up to 1 Gbps on fibre and 300 Mbps on cellular)
  • Access to mobile edge capabilities (precise user location, authentication, payment, postpaid / prepaid, demographics… depending on the operator's available APIs)
  • Better-than-cloud, better-than-Wi-Fi services and connectivity (storage, video production, remote desktop, collaboration, autonomous robots…)
  • Flexible deployment and operating models (dedicated, multi-tenant…)
  • Guaranteed local data residency (legal, regulatory and privacy compliance)
  • Reduced cloud costs through data thinning and preprocessing before transfer to the cloud (see the sketch after this list)
  • High-performance ML and AI inference
  • Real-time guidance and configuration of autonomous systems
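On the cloud cost point, here is a minimal sketch of edge-side data thinning, assuming a stream of per-second sensor readings; the aggregation window and anomaly threshold are illustrative, not drawn from any particular deployment:

```python
from statistics import mean

def thin(readings, window=60, anomaly_threshold=90.0):
    """Aggregate per-second readings into per-window summaries at the edge,
    forwarding raw samples only when they look anomalous."""
    to_cloud = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        to_cloud.append({
            "mean": mean(chunk),
            "max": max(chunk),
            # raw detail crosses the WAN only for the interesting samples
            "anomalies": [r for r in chunk if r > anomaly_threshold],
        })
    return to_cloud

# One hour of per-second telemetry becomes 60 summary records instead of
# 3600 raw values before it is shipped to the public cloud.
summaries = thin([20.0 + (i % 7) for i in range(3600)])
print(len(summaries))
```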


It is likely that many enterprise segments will want to benefit from this high-performance cloud. It is also unlikely that operators alone will be able to design products and services for every vertical and segment. Operators will probably focus on a few specific accounts and verticals, while cloud integration providers rush in to enable vertical-specific edge cloud and connectivity services:
  • Automotive
  • Transport
  • Manufacturing
  • Logistics
  • Retail
  • Banking and insurance
  • IoT
  • M2M…

Each of these verticals already has a connectivity value chain, in which network operators are merely a utility provider for higher-value services and products. Hybrid local cloud computing offers operators the opportunity to move up the value chain by providing new and enhanced connectivity and computing products directly to consumers (B2C), enterprises (B2B) and developers (B2B2x).

Fixed and mobile networks have not been designed to expose their capabilities to third parties for the reservation, consumption and payment of discrete computing and connectivity services. Edge computing, as a greenfield environment, is a great place to start for an operator that wants to offer these types of services: because it is new, there is no deployed legacy, and the underlying technology is closer to cloud native, which is necessary to create a developer and enterprise platform. Nonetheless, an abstraction layer is needed to federate and orchestrate the edge compute infrastructure and to provide a web-based authentication, management, reservation, fulfillment, consumption and payment model through which enterprises and developers can contract these new telco services.
This is what a platform provides: an abstraction layer that hides the complexity of telco networks, federates edge computing capacity across networks and operators, and presents a coherent marketplace where enterprises and developers can build and consume the new services offered by the operator community as IaaS, PaaS and SaaS. By deploying a platform, operators can reintegrate the cloud supply chain, but they will have to decide whether to own the developer relationship (and build their own platform) or to benefit from existing ecosystems (and deploy an existing third-party platform). The first path is a great effort, but the revenues flow directly to the operator and the platform is just another technology layer. In the second, revenues go to the platform provider and are shared with the operator; time to market is faster, but control and margins are lower. This model is, in my mind, inevitable; it remains to be seen whether operators will be able to develop and deploy the first option in time and at scale.
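A toy sketch of that abstraction layer, with invented class and field names: the federation hides which operator owns a site and exposes a single catalog and placement call.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    operator: str       # which telco owns the site
    region: str         # where it is
    latency_ms: float   # measured RTT from the target user population
    flavors: tuple      # compute profiles offered at this site

class EdgeFederation:
    """Hides per-operator differences behind one catalog and one placement call."""
    def __init__(self, sites):
        self.sites = list(sites)

    def catalog(self, region):
        return [s for s in self.sites if s.region == region]

    def place(self, region, flavor, max_latency_ms):
        candidates = [s for s in self.catalog(region)
                      if flavor in s.flavors and s.latency_ms <= max_latency_ms]
        # Placement policy (nearest, cheapest, greenest...) lives in the
        # platform, not in any single telco network.
        return min(candidates, key=lambda s: s.latency_ms, default=None)

fed = EdgeFederation([
    EdgeSite("operator-a", "quebec", 8.0, ("cpu.small", "gpu.small")),
    EdgeSite("operator-b", "quebec", 14.0, ("cpu.small",)),
])
print(fed.place("quebec", "gpu.small", max_latency_ms=10))
```

Whoever owns that placement logic owns the developer relationship, which is exactly what is at stake in the build-versus-deploy decision above.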

Friday, May 8, 2020

What are today's options to deploy a telco cloud?

Over the last 7 years, we have seen leading telcos embrace cloud technology as a means to create an elastic, automated, resilient and cost-effective network fabric. There have been many different paths and options from a technological, cultural and commercial perspective.

Typically, there are 4 categories of solutions telcos have been exploring:

  • Open source-based implementation, augmented by internal work
  • Open source-based implementation, augmented by traditional vendor
  • IT / traditional vendor semi-proprietary solution
  • Cloud provider solution


The jury is still out as to which option will prevail, as they all have seen growing pains and setbacks.

Here is a quick cheat sheet of some possibilities, based on your priorities:



[Table: high-level cheat sheet of telco cloud options, with pros and cons by priority]

Obviously, this table changes quite often based on the progress and announcements of the various players, but it can come in handy if you want to evaluate, at a high level, some of the options and the pros / cons of deploying one vendor or open source project versus another.

Details and commentary are part of my workshops and report on telco edge and hybrid cloud networks.

Thursday, April 23, 2020

Hyperscalers enter telco battlefront

We have, over the last few weeks, seen a flurry of announcements from hyperscalers investing in telco infrastructure and networks: Facebook's $5.7B investment in India's Reliance Jio, Microsoft's acquisition of Affirmed Networks for $1.35B, AWS' launch of Outposts and Google's Anthos ramp-up.


Why are hyperscalers investing in telecom gear and why now?

Facebook had signalled its intent as far back as 2016, when Mark Zuckerberg presented his vision for the future of the company at Mobile World Congress.


Beyond the obvious transition from picture and video sharing to virtual / augmented reality, tucked into the top right of that slide are two innocuous words: “telco infra”.
What Facebook realized is that basically anyone who has regular access to broadband will likely use a Facebook service. One way to increase the company’s growth is to invent / buy / promote more services, which is costly and uncertain. Another way is simply to connect more people.
With over 2.5 billion users of Facebook products, the company still has some room to grow in this area, but the key limiting factor seems to be connectivity itself. The last billions of unconnected people are harder to reach because traditional telecom networks do not extend to them. The last unconnected are mostly in rural areas: geographically dispersed, with lower incomes than their urban counterparts.
Looking at the problem from its own perspective, Facebook reached a conclusion similar to that of the network operators serving these markets: traditional telco networks are too expensive to deploy and maintain to reach this population sustainably. The same tactics operators have employed to disaggregate and stimulate the infrastructure market can be refocused and amplified by Facebook.
This was the start of Facebook Connectivity, a dedicated line of business in the social media giant's empire aimed at changing the cost structure of telco networks. Facebook Connectivity has evolved to encompass a variety of efforts, ranging from the creation of TIP (an open forum to disaggregate and open telco networks) to a joint venture with Telefonica dedicated to connecting the unconnected in Latin America and, this week, the announced acquisition of 9.9% of Reliance Jio in India.


How about Microsoft, Google and others?

Google had dipped its toes in telco waters with Project Fi and its fiber business well before the recent launch of its open source cloud platform Anthos.
Microsoft has been trying for the last 5 years to exploit the transition of telco networks from proprietary systems to IT. Even IBM's Red Hat acquisition had a telco angle, as the giant also tries to become a more prevalent vendor in the telco ecosystem.

So... why now?

Another powerful pivot point in telecom is the emergence of 5G. As the latest generation of telephony technology rolls out, telco networks are undeniably being re-architected and redesigned to look more like cloud networks. This creates an interesting set of risks and opportunities for incumbents and new entrants alike.
For operators, the main interest is to drastically reduce the cost of rolling out and maintaining complex telco networks by using the powerful virtualization, SDN and automation techniques that have allowed hyperscalers to dominate cloud computing. Applied correctly, these technologies can transform the cost structure of network operators, which is particularly important at the outset of multi-billion-dollar investments in 5G infrastructure. The radical cost disruption comes from the disaggregation of the network into hardware and software, the introduction of new vendors into the value chain who put price pressure on incumbents, and widespread automation and cloud economics.
These opportunities also bring new risks. While they open up the supply chain by introducing new vendors, they also allow new actors to enter the value chain, either to substitute and dominate legacy vendors or to create new control points (see the orchestrator wars I have mentioned in previous posts). The additional risk is that the cost of entry into telco drops for cloud hyperscalers as the technology to run telco networks transitions from proprietary, closed ecosystems to open source cloud environments.

The last pivot point is another telco technology very specifically aimed at creating a cloud environment in telco networks: edge computing. It creates a cloud layer that allows the provisioning, reservation and consumption of telco connectivity together with cloud computing. As a greenfield environment, it is a natural entry point into the telco ecosystem for cloud operators and new vendors alike.

Facebook, Google, AWS, Microsoft and others seem to think that 5G and edge computing in particular will be more cloud than telco. Network operators try to resist this claim by building a 5G network that will be a fully integrated connectivity and computing experience, complementary to public clouds, but different enough to command a premium, a different value chain and operator control.

In which direction will the market move? This and more in my report and workshop Edge computing and Hybrid Clouds 2020.

Wednesday, April 15, 2020

The business cases of edge computing

Edge computing has been a trendy topic over the last year. Between AWS' launch of Outposts, Microsoft's continuous efforts with Azure Stack, Nvidia's gaming-oriented EGX platform and Google's Anthos toolkit, much has been said about this market segment.
Network operators, on their side, have announced deployment plans in many geographies, but with little detail in terms of specific new services, revenues or expected savings.
Having been in the middle of several of these discussions, between vendors, hyperscalers, operators and systems integrators, I am glad to share a few thoughts on the subject.

Hyperscalers have not been looking at edge computing as a new business line, but rather as an extension of their current cloud capabilities. Many use cases today cannot be fully satisfied by the cloud, due to a combination of high / variable latency, network congestion and a lack of visibility / control over the last mile connectivity.
For instance, anyone who has tried to edit a diagram online in PowerPoint with Office 365, or to play a massively multiplayer online cloud game, will recognize how maddeningly frustrating the experience can be.
Edge computing, in the sense of bringing cloud resources physically closer to where data is consumed / produced, makes sense to reduce latency and the need for dedicated on-premise resources. From a hyperscaler's perspective, edge computing can be as simple as dropping a few racks in an operator's data center to let clients use and configure new availability zones with specific performance and pricing.
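That model is already visible in public cloud SDKs, where an edge location is addressed like any other availability zone. A sketch using boto3; the AMI and the metro-edge zone name below are placeholders, not a specific live offering:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Launching into an edge location is just a placement choice: the zone name
# below stands in for a metro-edge zone and is purely illustrative.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-west-2-lax-1a"},
)
```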

Network operators, who have largely lost the cloud computing wholesale market to the hyperscalers, see edge computing as an opportunity to reintegrate the value chain by offering cloud-like services with incomparable performance. Ideally, they would like to capture and retain the emerging high-performance cloud computing market that is sure to spawn a new category of digital services, ranging from AI-augmented manufacturing and automation to autonomous vehicles, ubiquitous facial and object recognition and compute-less smart devices. The problem is that many of these hypothetical services are ill-defined, far-fetched and futuristic, which does not inspire sufficient confidence in the CFO who has to approve multi-billion-dollar capital expenditures to get going.
But surely, if the likes of Microsoft, Intel, HP, Google, Facebook and AWS are investing in edge computing, there must be something there? What are operators missing to make the edge computing business case positive?

Mobile or multi access edge computing?

Many operators first looked at edge computing from the mobile perspective. The mobile edge computing business case remains extremely uncertain: no identified use case justifies the cost of deploying thousands of mini compute facilities at mobile sites in the short term. Even with the prospect of upgrading networks to 5G, the added cost of mobile edge computing is hard to justify.

If not at mobile sites, the best bet for network operators to deploy edge computing is in Central Offices (CO). These facilities house switching platforms for copper, fiber and DSL connectivity and are overdue for an upgrade in many markets. The deployment of fibre, the replacement of copper and the evolution from GPON to XGS-PON and NG-PON2 are excellent windows of opportunity to replace aging, single-purpose infrastructure with open, software-defined computing capability.
The level of investment to retool central offices into mini data centers is orders of magnitude lower than in the mobile case, and it is completely flexible. It is not necessary to convert every central office; one can proceed by deploying one per state / province / region and increase capillarity as business dictates.

What use cases would make edge computing's business case positive for operators in that scenario?


  • First, for operators who offer triple and quadruple play, replacing aging dedicated infrastructure for TV, fixed telephony, enterprise and residential connectivity with a cloud native, software-defined, open architecture provides interesting savings and benefits. The savings come from the separation of hardware and software, the sourcing and deployment of white boxes, and the opex gains of separating the control plane and centralizing and automating service elasticity.
  • Additional savings come from deploying content / video caches at the edge. Particularly for TV providers facing growth in on-demand and unicast live traffic, positioning caches at the edge allows up to 80% savings in content transport (see the back-of-envelope sketch after this list). These savings are likely to grow with the upgrade from HD to 4K and 8K and the rise of AR/VR.
  • Finally, for operators who deploy CPE in their customers' homes, edge computing drastically simplifies this equipment and reduces its cost and its deployment / maintenance burden, by moving services into the Central Office and reducing the need for storage and compute in the CPE.
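A back-of-envelope illustration of the cache savings claim, with invented traffic volumes: if an edge cache serves a fraction of requests locally (its hit ratio), core transport shrinks by that same fraction.

```python
def core_transport_gb(total_gb, edge_hit_ratio):
    """Traffic that still crosses the core network after edge caching."""
    return total_gb * (1.0 - edge_hit_ratio)

daily_video_gb = 50_000   # illustrative daily unicast video volume
for hit_ratio in (0.5, 0.8):
    remaining = core_transport_gb(daily_video_gb, hit_ratio)
    print(f"hit ratio {hit_ratio:.0%}: {remaining:,.0f} GB cross the core "
          f"({hit_ratio:.0%} of transport saved)")
```

An 80% hit ratio, plausible for popular on-demand catalogs, is where the 80% transport saving above comes from.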

While the savings can be significant in the long run, no operator can justify replacing existing infrastructure on savings alone before it is fully amortized. This is why some operators are looking at these scenarios only for greenfield fiber deployments or as part of massive copper replacement windows.
Savings alone, in all likelihood, won't allow operators to deploy at the rhythm necessary to counter the hyperscalers. New revenue streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited by the poor online experience. The most successful game franchises are massively multiplayer online titles: they pitch dozens of players against each other and require tightly controlled latency between all players for fair and enjoyable gameplay. Only operators can provide that controlled latency, by deploying gaming servers at the edge (a placement sketch follows this list). Even without a full-blown gaming service, caching games at the edge can drastically reduce download times for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today and are proven to provide a superior experience compared to the cloud, ranging from high-performance cloud storage to remote desktop, video surveillance and recognition.
  • Beyond operator-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
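On the gaming point, here is a minimal sketch of latency-aware server placement, assuming measured RTTs from each player to each candidate edge site; the fairness budget and site names are invented for illustration:

```python
def pick_game_server(players, edge_sites, fairness_ms=15.0):
    """Pick the edge site minimizing the latency spread between players.

    players: {player_id: {site: rtt_ms}} measured RTTs per candidate site.
    Returns the site with the smallest gap between the best- and the
    worst-served player, provided it fits the fairness budget.
    """
    best_site, best_spread = None, float("inf")
    for site in edge_sites:
        rtts = [measurements[site] for measurements in players.values()]
        spread = max(rtts) - min(rtts)
        if spread < best_spread:
            best_site, best_spread = site, spread
    return best_site if best_spread <= fairness_ms else None

players = {
    "p1": {"co-east": 6.0, "co-west": 22.0},
    "p2": {"co-east": 9.0, "co-west": 12.0},
}
print(pick_game_server(players, ["co-east", "co-west"]))  # -> co-east
```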
The main issue at this stage for operators is to decide whether to let hyperscalers deploy their infrastructure in their networks, capturing most of the value of these emerging services but opening a new wholesale hosting revenue line, or to go it alone, individually or as a federation of operators, deploying a telco cloud infrastructure and building the platform needed to resell edge compute resources in their networks.

This and a lot more use cases and business cases in my online workshop and report Edge Computing 2020.