
Friday, July 5, 2024

Readout: Ericsson's Mobility Report June 2024

 


For several years now, Ericsson has published a yearly report presenting its view of the evolution of connectivity. Like Cisco's annual internet report, it provides interesting data points on the maturity of telecom technology and services, but focused on cellular technology, lately embracing fixed wireless access and non-terrestrial networks as well.

In this year's edition, a few elements caught my attention:

  • Devices supporting network slicing are few and far between. Only iOS 17 and Android 13 support some capabilities to indicate slicing parameters to their underlying applications. These devices are the higher-end latest smartphones, so it is no wonder that 5G Stand Alone is late in delivering on its promises, if end-to-end slicing is only possible for a small fraction of customers. It is still possible to deploy slicing without device support, but there are limitations: slicing per content / service is not feasible, while slicing per device or subscriber profile remains possible.

  • RedCap (5G Reduced Capability) for IoT, wearables, sensors, etc. is making its appearance on networks, mostly as demos and trials at this stage. The first devices are unlikely to reach mass-market availability until the end of next year.

  • Unsurprisingly, mobile data traffic is still growing, albeit at a lower rate than previously reported, with a 25% yearly growth rate or just over 6% quarterly. The growth is mostly due to smartphone and 5G penetration and to video consumption, which accounts for about 73% of the traffic. This traffic data includes Fixed Wireless Access, although it is not broken down. The rollout of 5G, particularly in mid-band, together with carrier aggregation, has allowed mobile network operators to compete efficiently with fixed broadband operators through FWA. FWA's growth is, in my mind, the first successful application of 5G as a differentiated connectivity product. As devices and modems supporting slicing appear, more sophisticated connectivity and pricing models can be implemented. FWA price packages differ markedly from mobile data plans: the former are mostly speed-based, emulating cable and fibre offerings, whereas the latter are usually all-you-can-eat best-effort connectivity.

  • Where the traffic growth projections become murky is with the impact of XR services. Mixed, augmented and virtual reality services haven't really taken off yet, but their possible impact on traffic mix and network load could be immense. XR requires a number of technologies to reach maturity at the same time (bendable / transparent screens; low-power, portable, heat-efficient batteries; low latency / high compute on device / at the edge; high downlink / uplink capabilities; deterministic mesh latency over an area...) to reach mass market, and we are still some way away from it in my opinion.

  • Differentiated connectivity for cellular services is a long-standing subject of interest of mine. My opinion remains the same: "The promise and business case of 5G were supposed to revolve around new connectivity services. Until now, essentially, whether you have a smartphone, a tablet, a laptop, a connected car or an industrial robot, and whether you are a work-from-home or road-warrior professional, all connectivity products are really the same. The only variables are price and coverage.

    5G was supposed to offer connectivity products adapted to different device types, verticals and industries, geographies, vehicles, drones... The 5G business case hinges on enterprises, verticals and governments adopting, and being willing to pay for, enhanced connectivity services. By and large, this hasn't happened yet. There are several reasons for this, the main one being that to enable these services, a network overhaul is necessary.

    First, a service-based architecture is necessary, comprising 5G Stand Alone, telco cloud, Multi-Access Edge Computing (MEC), and Service Management and Orchestration. Then, cloud-native RAN, either Cloud RAN or Open RAN (and particularly the RAN Intelligent Controllers - RICs), would be useful. All this "plumbing" is needed to enable end-to-end slicing, which in turn will create the capability to serve distinct and configurable connectivity products.

    But that's not all... A second issue is that although it is accepted wisdom that slicing will create connectivity products that enterprises and governments will be ready to pay for, there is little evidence of it today. One of the key differentiators of the "real" 5G and slicing will be deterministic speed and latency. While most market actors are ready to recognize that, in principle, controllable latency would be valuable, no one really knows the incremental value of going from variable best effort to a deterministic 100, 10 or 5 milliseconds of latency.

    The last hurdle is the realization by network operators that Mercedes, Walmart, 3M, Airbus... have a better understanding of their connectivity needs than any carrier, and that they have skilled people able to design networks and connectivity services across WAN, cloud, private and cellular networks. All they need is access and a platform with APIs. A means to discover, reserve and design connectivity services on the operator's network will be necessary, and the successful operators will understand that their network skillset might be useful for consumers and small / medium enterprises, but less so for large verticals, governments and companies." Ericsson is keen to promote and sell to MNOs the "plumbing" that enables this vision, but will this be sufficient to fulfill the promise?

  • Network APIs are a possible first step to open up connectivity to third parties willing to program it. Network APIs are notably absent from the report, maybe due to the fact that the company announced a second impairment charge of $1.1B (after an initial $2.9B write-off) in less than a year on the $6.2B acquisition of Vonage.

  • Private networks are another trend highlighted in the report, with a convincing example of an implementation with the Northstar innovation program, in collaboration with Telia and AstaZero. The implementation focuses on automotive applications, from autonomous vehicles to V2X connectivity and remote control... On paper, it delivers everything operators dream about when thinking of differentiated connectivity for verticals and industries. One has to wonder how much it costs and whether it is sustainable if most of the technology is provided by a single vendor.

  • Open RAN and programmable networks are showcased through the AT&T deal that I have previously reported on and commented on. There is no doubt that single-vendor automation, programmability and Open RAN can be implemented at scale. The terms of the deal with AT&T seem to indicate that it is a great cost benefit for them. We will have to measure the benefits as the changes are rolled out in the coming years.


Wednesday, July 3, 2024

June 2024 Open RAN requirements from Vodafone, Telefonica, Deutsche Telekom, TIM and Orange


As is now customary, the "big 5" European operators behind Open RAN have released their updated requirements to the market, indicating to vendors where they should direct their roadmaps to have the best chances of being selected in these networks.

As with previous iterations, I find it useful to compare and contrast the unanimous and highest-priority requirements as indications of market maturity and direction. Here is my read on this year's release:

Scenarios:

As per last year, the big 5 unanimously require support for O-RU and vDU/CU with the open fronthaul interface on site for macro deployments. This indicates that although the desire is to move to a disaggregated implementation, with vDU / CU potentially moving to the edge or the cloud, not all operators are fully ready for these scenarios; they first prioritize a like-for-like replacement of a traditional gNodeB with a disaggregated, virtualized version, but all at the cell site.

Moving to the high-priority scenarios requested by a majority of operators, vDU/vCU in a remote site with O-RU on site makes its appearance, together with RAN sharing. Both MORAN and MOCN scenarios are desirable, the former with shared O-RU and dedicated vDU/vCU, and the latter with shared O-RU, vDU and optionally vCU. In all cases, a RAN sharing management interface is to be implemented to allow host and guest operators to manage their RAN resources independently.

Additional high-priority requirements are the support for indoor and outdoor small cells: indoors, sharing O-RU and vDU/vCU in multi-operator environments; outdoors, single-operator, with O-RU and vDU either co-located on site or fully integrated with a Higher Layer Split. The last high-priority requirement is for 2G / 3G support, without indication of architecture.

Security:

The security requirements are essentially the same as last year, freely adopting 3GPP requirements for Open RAN. The polemic around Open RAN's level of security compared to other cloud virtualized applications or traditional RAN architectures has been put to bed. Most realize that open interfaces inherently expose more attack surface, but this is not specific to Open RAN; every cloud-based architecture has the same drawback. Security by design goes a long way towards alleviating these concerns, and a proper zero-trust architecture can in many cases provide a higher security posture than legacy implementations. In this case, extensive use of IPSec, TLS 1.3 and certificates at the interface and port levels for the open fronthaul and management planes provides the necessary level of security, together with the mTLS interface between the RICs. The O-Cloud layer must support Linux security features, secure storage, and encrypted secrets with an external storage and management system.
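As an illustration, the TLS 1.3 / mutual-authentication posture described above can be sketched with Python's standard `ssl` module. This is a minimal sketch of the policy only; the certificate file names in the comments are hypothetical, and real O-RAN deployments configure this in their own management stacks:

```python
import ssl

def mtls_policy() -> ssl.SSLContext:
    """Server-side TLS policy in the spirit of the requirements above:
    TLS 1.3 as the floor, and a client certificate required (mutual TLS)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # reject anything older than TLS 1.3
    ctx.verify_mode = ssl.CERT_REQUIRED           # the peer must present a valid certificate
    return ctx

# In a real deployment the operator would then load its certificate chain and
# the CA used to validate peers, e.g. (hypothetical file names):
#   ctx.load_cert_chain("mgmt_plane.pem", "mgmt_plane.key")
#   ctx.load_verify_locations("operator_ca.pem")
```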

CaaS:

As per last year, the cloud-native infrastructure requirements have been refined, including hardware accelerator (GPU, eASIC) Kubernetes support, block and object storage for dedicated and hyperconverged deployments, etc. Kubernetes infrastructure discovery, deployment, lifecycle management and cluster configuration have been further detailed. Power-saving requirements have been added at the fan and CPU level, with SMO-driven policy and configuration and idle-mode power-down capabilities.

CU / DU:

CU / DU interface requirements remain the same, basically the support for all Open RAN interfaces (F1, HLS, X2, Xn, E1, E2, O1...). The support for both look-aside and in-line accelerator architectures is also the highest priority, indicating that operators haven't really reached a conclusion on a preferred architecture and are mandating both for flexibility's sake (in other words, in-line acceleration hasn't convinced them that it can efficiently, in cost and power, replace look-aside). Fronthaul ports must support up to 200Gb through 12 x 10/25Gb combinations, and midhaul up to 2 x 100Gb. Energy efficiency and consumption are to be reported for all hardware (servers, CPUs, fans, NIC cards...). Power consumption targets for D-RAN of 400 watts at 100% load for 4T4R and 500 watts for 64T64R are indicated. These targets seem optimistic and poorly indicative of current vendors' capabilities in that space.

O-RU:

The radio situation is still messy and my statements from last year still mostly stand: "While all operators claim highest urgent priority for a variety of Radio Units with different form factors (2T2R, 2T4R, 4T4R, 8T8R, 32T32R, 64T64R) in a variety of bands (B1, B3, B7, B8, B20, B28B, B32B/B75B, B40, B78...) and with multi-band requirements (B28B+B20+B8, B3+B1, B3+B1+B7), there is no unanimity on ANY of these. This leaves vendors in a quandary, trying to find which configurations could attract enough volume to make the investments profitable. There are hidden dependencies that are not spelled out in the requirements, and this is where we see the limits of the TIP exercise. Operators cannot really at this stage select 2 or 3 new RU vendors for an Open RAN deployment, which means that, in principle, they need vendors to support most, if not all, of the bands and configurations they need to deploy in their respective networks. Since each network is different, it is extremely difficult for a vendor to define the minimum product line-up necessary to satisfy most of the demand. As a result, the projections for volume are low, which makes the vendors focus only on the most popular configurations. While everyone needs 4T4R or 32T32R in the n78 band, having 5 vendors providing options for these configurations, with none delivering B40 or B32/B75, makes it impossible for operators to select a single vendor and for vendors to aggregate sufficient volume to create a profitable business case for Open RAN." This year, there is one high-priority configuration that has unanimous support: 4T4R B3+B1. The other highest-priority configurations requested by a majority of operators are 2T4R B28B+B20+B8, 4T4R B7, B3+B1, B32B+B75B, and 32T32R B78, with various power targets from 200 to 240W.

Open Front Haul:

The fronthaul interface requirements only acknowledge the introduction of uplink enhancements for massive MIMO scenarios as they will be introduced into the 7.2.x specification, with a lower priority. This indicates that while Ericsson's proposed interface and its architecture impact are being vetted, it is likely to become an optional implementation, left to the vendor's choice until / unless credible cost / performance gains can be demonstrated.

Transport:

Optical budgets and scenarios are now introduced.

RAN features:

Final MoU positions are now proposed. Unanimous items introduced in this version revolve mostly around power consumption and efficiency counters, KPIs and mechanisms. Other new requirements follow 3GPP Rel-16 and Rel-17 on carrier aggregation, slicing and MIMO enhancements.

Hardware acceleration:

A new section is introduced to clarify the requirements associated with L1 and L2 use of look-aside and in-line acceleration. The most salient requirement is for simultaneous multi-RAT 4G/5G support.

Near RT RIC:

The Near Real Time RIC requirements continue to evolve and be refined. My perspective on the topic hasn't changed, and a detailed analysis can be found here. In short, letting third parties prescribe policies that manipulate the DU's scheduler is anathema for most vendors in the space and, beyond the technical difficulties, would go against their commercial interests. Operators will have to push very hard, with much commercial incentive, to see xApps from third-party vendors commercially deployed.

E2E use cases:

End-to-end use cases are being introduced to clarify the operators' priorities for deployments. There are many, but they offer a good understanding of those priorities: traffic steering for dynamic traffic load balancing; QoE- and QoS-based optimization, to allocate resources based on a desired quality outcome; RAN sharing; slice assurance; V2X; UAV; energy efficiency... This section is a laundry list of desiderata, mostly all high priority, showing perhaps that operators are losing focus on which real use cases they should pursue as an industry. As a result, it is likely that too many priorities will result in no priority at all.

SMO:

With over 260 requirements, SMO and Non-RT RIC is probably the most mature section and shows a true commercial priority for the big 5 operators.

All in all, the document provides a good idea of the level of maturity of Open RAN for the operators that have supported it the longest. The type of requirements and their prioritization provide a useful framework for vendors who know how to read them.

More in-depth analysis of Open RAN and the main vendors in this space is available here.


Tuesday, January 9, 2024

HPE acquires Juniper Networks


On January 8, the first rumors started to emerge that HPE was entering final discussions to acquire Juniper Networks for $13B. By January 9, HPE announced that it had entered into a definitive agreement for the acquisition.

Juniper Networks, known for its high-performance networking equipment, has been a significant player in the networking and telecommunications sector. It specializes in routers, switches, network management software, network security products, and software-defined networking technology. HPE, on the other hand, has a broad portfolio that includes servers, storage, networking, consulting, and support services.

 The acquisition of Juniper Networks by HPE could be a strategic move to strengthen HPE’s position in the networking and telecommunications sector, diversify its product offerings, and enhance its ability to compete with other major players in the market such as Cisco.

Most analyses I have read so far have pointed to AIOps and Mist AI as the core thesis for the acquisition, enabling HPE to bridge the gap between equipment vendor and solution vendor, particularly in the telco space.

While this is certainly an aspect of the value that Juniper Networks would provide to HPE, I believe that the latest progress from Juniper Networks in Telco beyond transport, particularly as an early leader in the emerging field of RAN Intelligence and SMO (Service Management and Orchestration) was a key catalyst in HPE's interest.

After all, Juniper Networks has been a networking specialist and leader for a long time, from SDN, SD-WAN and optical to data center, wired and wireless networks. While the company has been making great progress, gradually virtualizing and cloudifying its routers, firewalls and gateway functions, no revolutionary technology had emerged there until the application of machine learning and predictive algorithms to the planning, configuration, deployment and management of transport networks.

What is new as well is Juniper Networks' effort to penetrate the telco functions domain, beyond transport. The key area ripe for disruption has been the Radio Access Network (RAN), specifically with Open RAN becoming an increasingly relevant industry trend pervading network architectures, culminating in AT&T's selection of Ericsson last month to deploy Open RAN for $14B.

Open RAN offers disaggregation of the RAN, with potential multivendor implementations benefitting from open standard interfaces. Juniper Networks, not a traditional RAN vendor, has been quick to capitalize on its AIOps expertise by jumping into the RAN intelligence market space, creating one of the most advanced RAN Intelligent Controllers (RIC) on the market and aggressively integrating with as many reputable RAN vendors as possible. This endeavor, opening up the multi-billion-dollar RAN and SMO markets, is pure software and heavily biased towards AI/ML for automation and prediction.

HPE has been heavily investing in the telco space of late, becoming a preferred supplier of telco CaaS and Cloud Native Function (CNF) physical infrastructure. What HPE has not been able to do is create software or become a credible solutions provider / integrator. The acquisition of Juniper Networks could help solve this. Just like Broadcom's acquisition of VMware (another early RAN intelligence leader), or Cisco's acquisition of Accedian, hardware vendors yearn to move up the value chain by acquiring software and automation vendors, giving them the capacity to provide integrated end-to-end solutions and to achieve synergies and economies of scale through vertical integration.

The playbook is not new, but this potential acquisition could signal a consolidation trend in the telecommunications and networking industry, suggesting a more competitive landscape with fewer but larger players. This could have far-reaching implications for customers, suppliers, and competitors alike.


Monday, December 4, 2023

Is this the Open RAN tipping point: AT&T, Ericsson, Fujitsu, Nokia, Mavenir


The latest publications around Open RAN deliver a mixed bag of progress and skepticism. How should we interpret this conflicting information?

A short retrospective of the most recent news:

On the surface, Open RAN seems to be benefiting from strong momentum and delivering on its promise of disrupting traditional RAN with the introduction of new suppliers, together with the opening of the traditional architecture to a more disaggregated and multivendor model. The latest announcement from AT&T and Ericsson even suggests that the promised reduced TCO for brownfield deployments is achievable:
AT&T's yearly CAPEX guidance is supposed to decrease from a high of ~$24B to about $20B per year starting in 2024. If the $14B spent over 5 years on Ericsson RAN yields the announced 70% of traffic on Open RAN infrastructure, AT&T might have dramatically improved their RAN CAPEX with this deal.
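A quick back-of-envelope check, using only the figures quoted above, shows why the deal looks favorable on paper (this is arithmetic on public guidance, not an actual TCO analysis):

```python
# Back-of-envelope check of the AT&T / Ericsson figures quoted above.
ericsson_deal_total_b = 14   # $14B committed over 5 years
deal_years = 5
capex_before_b = 24          # ~$24B/year at the recent high
capex_after_b = 20           # ~$20B/year guidance starting in 2024

annual_ericsson_spend_b = ericsson_deal_total_b / deal_years   # 2.8
annual_capex_reduction_b = capex_before_b - capex_after_b      # 4

print(f"Annual Ericsson RAN spend: ${annual_ericsson_spend_b:.1f}B")
print(f"Annual CAPEX reduction:    ${annual_capex_reduction_b:.1f}B")
```

In other words, the committed Ericsson spend of roughly $2.8B per year fits comfortably inside a guided CAPEX reduction of about $4B per year, which is what makes the "improved RAN CAPEX" reading plausible.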

What is driving these announcements?

For network operators, Open RAN has been about strategic supply chain diversification. The coalescence of the market into an oligopoly, and a duopoly after the exclusion of Chinese vendors from a large number of Western networks, has created an unfavorable negotiating position for the carriers. The business case of 5G relies heavily on declining costs, or rather a change in the cost structure of deploying and operating networks. Open RAN is an element of it, together with edge computing and telco clouds.

For operators

The decision to move to Open RAN is mostly no longer up for debate. While the large majority of brownfield networks will not completely transition to Open RAN, they will introduce the technology alongside the traditional architecture to foster cloud-native network implementations. It is not a matter of if but a matter of when.
When varies for each market / operator. Operators do not roll out a new technology just because it makes sense, even if the business case is favorable. A window of opportunity has to present itself to facilitate the introduction of the new technology. In the case of Open RAN, the windows can be:
  • Generational changes: 4G to 5G, NSA to SA, 5G to 6G
  • Network obsolescence: the RAN contracts are up for renewal, the infrastructure is aging or needs a refresh. 
  • New services: private networks, network slicing...
  • Internal strategy: transition to cloud native, personnel training, operating models refresh
  • Vendor weakness: nothing better than an end-of-quarter / end-of-year big infrastructure bundle discount to secure and alleviate the risks of introducing new technologies

For traditional vendors

For traditional vendors, the innovator's dilemma has been at play. Nokia endorsed Open RAN early on, with little to show for it until recently, when it convincingly demonstrated multivendor integration and live trials. Ericsson, as market leader, has been slower to endorse Open RAN and has so far adopted it selectively, for understandable reasons.

For emerging vendors

Emerging vendors have had mixed fortunes with Open RAN. The early market leader, Altiostar, was absorbed by Rakuten, which gave the market pause for ~3 years while other vendors caught up. Mavenir, Samsung, Fujitsu and others offer credible products and services, with possible multivendor permutations.
Disruptors, emerging and traditional vendors are all battling in the RAN intelligence and orchestration market segment, which promises to deliver additional Open RAN benefits (see link).


Open RAN still has many challenges to overcome before becoming a solution that can be adopted in any network, but the latest momentum seems to show progress towards implementation of the technology at scale.
More details can be found through my workshops and advisory services.



Tuesday, November 7, 2023

What's behind the operators' push for network APIs?

 


As I saw the latest announcements from the GSMA, Telefonica and Deutsche Telekom, as well as Ericsson's asset impairment on the Vonage acquisition, I was reminded of the call I made three years ago for the creation of operator platforms.

On one hand, 21 large operators (namely America Movil, AT&T, Axiata, Bharti Airtel, China Mobile, Deutsche Telekom, e& Group, KDDI, KT, Liberty Global, MTN, Orange, Singtel, Swisscom, STC, Telefónica, Telenor, Telstra, Telecom Italia (TIM), Verizon and Vodafone) within the GSMA launched an initiative to open their networks to developers with the release of 8 "universal" APIs (SIM Swap, Quality on Demand, Device Status, Number Verification, Simple Edge Discovery, One Time Password SMS, Carrier Billing - Check Out and Device Location).

Additionally, Deutsche Telekom was the first to pull the trigger on the launch of its own gateway, "MagentaBusiness API", based on Ericsson's written-down asset. The 3 APIs launched are Quality on Demand, Device Status - Roaming and Device Location, with more to come.

Telefonica, on its side, launched its own Open Gateway offering shortly after DT, with 9 APIs (Carrier Billing, Know Your Customer, Number Verification, SIM Swap, QoD, Device Status, Device Location, QoD Wi-Fi and Blockchain Public Address).

On the other hand, Ericsson wrote off 50% of the Vonage acquisition, while "creating a new market for exposing 5G capabilities through network APIs".

Dissonance much? Why are operators launching network APIs with fanfare while one of the earliest, largest vendors in the field reports an asset depreciation, all while claiming a large market opportunity?

The move for telcos to expose network APIs is not new and has seen a few unsuccessful, aborted tries (GSMA OneAPI in 2013, DT's MobiledgeX launch in 2019). The premises have varied over time, but the central tenet remains the same. Although operators have great experience in rolling out and operating networks, they have essentially been providing the same connectivity services to all consumers, enterprises and governmental organizations without much variation. The growth in cloud networks is underpinned by new generations of digital services, ranging from social media and video streaming for consumers to cloud storage, computing, CPaaS and the migration of IT functions to the cloud for enterprises. Telcos have been mostly observers in this transition, with some timid tries to participate, but by and large they have been quite unsuccessful in creating and rolling out innovative digital services. As edge computing and the Open RAN RIC become possibly the first applications forcing telcos to look at tie-ins with hyperscale cloud providers, several strategic questions arise.

Telcos have been using cloud fabric and porting their vertical, proprietary systems to cloud-native environments for their own benefit. As this transition progresses, there is a realization that the growth of private networks is a reflection of enterprises' desire to create and manage their connectivity products themselves. While operators have been architecting and planning their networks for network slicing, hoping to sell managed connectivity services to enterprises, the latter have effectively been managing their connectivity in the cloud and in private networks themselves, without the telcos' assistance. This realization leads to an important decision: if enterprises want to manage their connectivity themselves and expand that control to 5G / cellular, should telcos let them, and if yes, by what means?

The answer is in network APIs. Without giving third parties access to the network itself, the best solution is to offer a set of controlled, limited tools that allow them to discover, reserve and consume network resources while the operator retains overall control of the network. There are a few conditions for this to work.

The first is essentially the necessity of universal access. Enterprises and developers have gone through the learning curve of using AWS, Google Cloud and Azure tools, APIs and semantics. They can conceivably see value in learning a new set with these telco APIs, but won't likely go through the effort if each telco has a different set in each country.

The second, and historically the hardest for telcos, is to create and manage an ecosystem and developer community. They have tried many times and in different settings, but in many cases have failed, enlisting only friendly developers in the form of their suppliers and would-be suppliers, dedicating efforts to furthering their commercial opportunities. The jury is still out as to whether this latest foray will be successful in attracting independent developers.

The third, and possibly the riskiest part in this equation, is that it remains untested which APIs would prove useful, and whether enterprises and developers will actually want to use them. Operators are betting that they can essentially create a telco cloud experience for developers more than 15 years after AWS launched, with fewer tools, less capacity to innovate, fewer cloud-native skills and a pretty bad record in nurturing developers and enterprises.
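To make the "discover, reserve and consume" idea concrete, a Quality-on-Demand request in the spirit of these gateway APIs could look roughly like this. The payload shape is inspired by the published CAMARA-style APIs, but the endpoint, field names and profile value here are illustrative assumptions, not any operator's actual API:

```python
# Hypothetical, CAMARA-inspired Quality-on-Demand payload builder.
# Field names and the profile value are illustrative assumptions.
def build_qod_session_request(device_ip: str, qos_profile: str, duration_s: int) -> dict:
    """Assemble the body of a QoD session request for a given device."""
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "qosProfile": qos_profile,   # e.g. a low-latency profile offered by the operator
        "duration": duration_s,      # seconds the requested QoS should apply
    }

payload = build_qod_session_request("203.0.113.7", "QOS_LOW_LATENCY", 3600)

# A real call would POST this JSON to the operator's gateway, e.g. (hypothetical URL):
#   POST https://api.example-telco.com/qod/v0/sessions
#   Authorization: Bearer <token>
```

The appeal for a developer is that the request says nothing about cells, slices or schedulers: the operator keeps control of the network while exposing only the intent.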

Ericsson's impairment of Vonage probably acknowledges that the central premise that telco APIs are desirable is unproven, that if it succeeds operators will want to retain control, and that there is less value in the platform than in the APIs themselves (the GSMA launch on an open-source platform essentially depreciates the Vonage acquisition directly).

Another path exists, which provides less control (and commercial upside) for telcos: hosting third-party cloud functions in their networks, even allowing third-party cloud infrastructure (such as AWS Outposts, for instance) to be collocated in their data centers. This option comes with the benefit of an existing ecosystem, toolset, services and clients, simply extending the cloud to the telco network. The major drawback is that the telco accepts its role as a utility provider of connectivity, with little participation in the service value creation.

Both scenarios are being played out right now and both paths represent much uncertainty and risks for operators that do not want to recognize the strategic implications of their capabilities.


Monday, September 25, 2023

Is Ericsson's Open RAN stance that open?

 

An extract from the Open RAN RIC and Apps report and workshop.

Ericsson is one of the most successful Telecom Equipment Manufacturers of all time, having navigated market concentration phases, the emergence of powerful rivals from China and elsewhere, and the pitfalls of the successive generations and their windows of opportunity for new competitors to emerge.

With a commanding estimated global market share of 26.9% (39% excluding China) in RAN, the company is the uncontested leader in the space. While the geopolitical situation and the ban of Chinese vendors in many western markets has been a boon for the company’s growth, Open RAN has become the largest potential threat to their RAN business.

At first skeptical (if not outright hostile) to the new architecture, the company has been keeping an eye on its development and traction over the last years and has formulated a cautious strategy to participate and influence its development.

In 2023, Ericsson seems to have accepted that Open RAN is likely here to stay and represents both a threat and an opportunity for its telecom business. The threat is of course to the RAN infrastructure business: while the company has been moving to Cloud RAN, virtualizing and containerizing its software, it still mostly ships vertical, fully integrated base stations.

When it comes to Open RAN, the company seems to get closer to embracing the concept, with conditions.

Ericsson has been advocating that the current low layer split 7.2.x is not suitable for massive MIMO and high-capacity 5G systems and is proposing an alternative fronthaul interface to the O-RAN Alliance. Cynics might say this is a delaying tactic, as other vendors have deployed massive MIMO on 7.2.x in the field, but as market leader, Ericsson has some strong datasets to bring to the conversation and contest the suitability of the current implementation. Ericsson is now publicly endorsing the Open RAN architecture and, having virtualized its RAN software, will offer a complete solution, with O-RU, vDU, vCU, SMO and Non-RT RIC. The fronthaul interface will rely on the recently proposed fronthaul, and the midhaul will remain the 3GPP F1 interface.

On the opportunity front, while most Ericsson systems usually ship with an Element Management System (EMS), which can be integrated into a Management and Orchestration (MANO) or Service Management and Orchestration (SMO) framework, the company has not entirely dominated this market segment and Open RAN, in the form of SMO and Non-RT RIC represent an opportunity to grow in the strategic intelligence and orchestration sector.

Ericsson is using the market leader playbook to its advantage: first rejecting Open RAN as immature, underperforming and insecure, then admitting that it can provide some benefits in specific conditions, and now embracing it with very definite caveats.

The fronthaul interface proposal by the company seems self-serving, as no other vendor has really raised the same concerns in terms of performance, and indeed commercial implementations have been observed with performance profiles comparable to those of traditional vendors.

The Non-RT RIC and rApp market positioning is astute and allows Ericsson to simultaneously claim support for Open RAN and attack the SMO market space with a convincing offer. The implementation is solid and reflects Ericsson's high industrialization and quality practices. It will doubtless offer a mature implementation of SMO / Non-RT RIC and rApps and provide a useful set of capabilities for operators who want to continue using Ericsson RAN with a higher instrumentation level. The slow progress on 3rd party integration, both from a RIC and an Apps perspective, is worrisome and could be either the product of the company's quality and administrative processes or a strategy to keep the solution fairly closed and Ericsson-centric, barring a few token 3rd party integrations.


Friday, September 18, 2020

Rakuten: the Cloud Native Telco Network

Traditionally, telco network operators have only collaborated in very specific environments; namely standardization and regulatory bodies such as 3GPP, ITU, GSMA...

There are a few examples of partnerships, such as Bridge Alliance or BuyIn, mostly for procurement purposes. When it comes to technology, integration, product and services development, however, examples of one carrier buying another's technology and deploying it in their own network have been rare.

It is not so surprising, if we look at how, in many cases, we have seen operators use their venture capital arm to invest in startups that end up rarely being used in their own networks. One has to think that using another operator's technology poses even more challenges.

Open source and network disaggregation, with associations like Facebook's Telecom Infra Project, the Open Networking Foundation (ONF) or the Linux Foundation O-RAN alliance have somewhat changed the nature of the discussions between operators.

It is well understood that the current oligopolistic situation among telco network suppliers is not sustainable in terms of long term innovation and cost structure. The wound is somewhat self-inflicted, operators having forced vendors to merge and acquire one another in order to sustain the scale and financial burden of surviving 2+ year procurement processes with drastic SLAs and penalties.

Recently, these trends have started to coalesce, with a renewed interest for operators to start opening up the delivery chain for technology vendors (see open RAN) and willingness to collaborate and jointly explore technology development and productization paths (see some of my efforts at Telefonica with Deutsche Telekom and AT&T on network disaggregation).

At the same time, hyperscalers, unencumbered by regulatory and standardization purview, have been able to achieve global scale and dominance in cloud technology and infrastructure. With the recent announcements by AWS, Microsoft and Google, we can see that there is interest and pressure to help network operators achieve cloud nativeness by adopting the hyperscalers' models, infrastructure and fabric.

Some operators might feel this is a welcome development (see Telefonica O2 Germany announcing the deployment of Ericsson's packet core on AWS) for specific use cases and competitive environments. 

Many, at the same time are starting to feel the pressure to realize their cloud native ambition but without hyperscalers' help or intervention. I have written many times about how telco cloud networks and their components (Openstack, MANO, ...) have, in my mind, failed to reach that objective. 

One possible guiding light in this industry over the last couple of years has been Rakuten's effort to create, from the ground up, a cloud native telco infrastructure that is able to scale and behave as a cloud, while providing the proverbial telco grade capacity and availability of a traditional network. Many doubted that it could be done - after all, the premise behind building telco clouds in the first place was that public cloud could never be telco grade.

It is now time to accept that it is possible and beneficial to develop telco functions in a cloud native environment.

Rakuten's network demonstrates that it is possible to blend traditional and innovative vendors from the telco and cloud environments to produce a cloud native telco network. The skeptics will say that Rakuten has the luxury of a greenfield network, and that much of its choices would be much harder in a brownfield environment.




The reality is that whether in the radio, the access, or the core, in OSS or BSS, there are vendors now offering cloud native solutions that can be deployed at scale with telco-grade performance. The reality, as well, is that not all functions and not all elements are cloud native ready.

Rakuten has taken the pragmatic approach to select from what is available and mature today, identifying gaps with their ideal end state and taking decisive actions to bridge the gaps in future phases.




Between the investment in Altiostar, the acquisition of Innoeye and the joint development of a cloud native 5G Stand Alone Core with NEC, Rakuten has demonstrated vision clarity, execution and commitment to not only be the first cloud native telco, but also to be the premier cloud native telco supplier with its Rakuten Mobile Platform. The latest announcement of an MoU with Telefonica could be a strong market signal that carriers are ready to collaborate with other carriers in a whole new way.


Friday, May 8, 2020

What are today's options to deploy a telco cloud?

Over the last 7 years, we have seen leading telcos embrace cloud technology as a means to create an elastic, automated, resilient and cost effective network fabric. There have been many different paths and options from a technological, cultural and commercial perspective.

Typically, there are 4 categories of solutions telcos have been exploring:

  • Open source-based implementation, augmented by internal work
  • Open source-based implementation, augmented by traditional vendor
  • IT / traditional vendor semi proprietary solution
  • Cloud provider solution


The jury is still out as to which option will prevail, as they all have seen growing pains and setbacks.

Here is a quick cheat sheet of some possibilities, based on your priorities:



Obviously, this table changes quite often, based on the progress and announcements of the various players, but it can come in handy if you want to evaluate, at a high level, some of the options and pros / cons of deploying one vendor or open source project vs another.

Details, comments are part of my workshops and report on telco edge and hybrid cloud networks.

Tuesday, January 28, 2020

Announcing telco edge computing and hybrid cloud report 2020


As I am ramping up towards the release of my latest report on telco edge computing and hybrid cloud, I will be releasing some previews. Please contact me privately for availability date, price and conditions.

In the 5 years since I published my first report on the edge computing market, it has evolved from an obscure niche to a trendy buzzword. What originally started as a mobile-only technology, has evolved into a complex field, with applications in IT, telco, industry and clouds. While I have been working on the subject for 6 years, first as an analyst, then as a developer and network operator at Telefonica, I have noticed that the industry’s perception of the space has polarized drastically with each passing year.

The idea that telecom operators could deploy and use a decentralized computing fabric throughout their radio access has been largely swept aside and replaced by the inexorable advances in cloud computing, showing a capacity to abstract decentralized computing capacity into a coherent, easy to program and consume data center as a service model.

As often, there are widely diverging views on the likely evolution of this model:

The telco centric view

Edge computing is a natural evolution of telco networks. 
5G necessitates robust fibre-based backhaul transport. With the deployment of fibre, it is imperative that the old copper switching centers (the central offices) convert into multi-purpose mini data centers. These are easier and less expensive to maintain than their traditional counterparts and offer interesting opportunities to monetize unused capacity.

5G will see a new generation of technology providers that will deploy cloud native software-defined functions that will help deploy and manage computing capabilities all the way to the fixed and radio access network.

Low-risk internal use cases such as CDN, caching, local breakout, private networks, parental control, DDOS detection and isolation, are enough to justify investment and deployment. The infrastructure, once deployed, opens the door to more sophisticated use cases and business models such as low latency compute as a service, or wholesale high performance localized compute that will extend the traditional cloud models and services to a new era of telco digital revenues.

Operators have long run decentralized networks, unlike cloud providers who favour federated centralized networks, and that experience will be invaluable to administer and orchestrate thousands of mini centers.

Operators will be able to reintegrate the cloud value chain through edge computing, their right to play underpinned by their control of, and capacity to program, the last mile connectivity, and by the fact that traditional public clouds will not outmatch them in the number and capillarity of data centers in their geographies (outside of the US).

With its long-standing track record of creating interoperable decentralized networks, the telco community will create a set of unifying standards that will make it possible to implement an abstraction layer across all telcos to sell edge computing services irrespective of network or geography.

Telco networks are managed networks, unlike the internet, they can offer a programmable and guaranteed quality of service. Together with 5G evolution such as network slicing, operators will be able to offer tailored computing services, with guaranteed speed, volume, latency. These network services will be key to the next generation of digital and connectivity services that will enable autonomous vehicles, collaborating robots, augmented reality and pervasive AI assisted systems.

The cloud centric view:

Edge computing, as it turns out, is less about connectivity than about cloud, unless you are able to weave in programmable connectivity.
Many operators have struggled with the creation and deployment of a telco cloud, for their own internal purposes or to resell cloud services to their customers. I don’t know of any operator who has one that is fully functional, serving a large proportion of their traffic or customers, and is anywhere as elastic, economic, scalable and easy to use as a public cloud.
So, while the telco industry has been busy trying to develop a telco edge compute infrastructure, virtualization layer and platform, the cloud providers have just started developing decentralized mini data centers for deployment in telco networks.

In 2020, the battle to decide whether edge computing is more about telco or about cloud is likely already finished, even if many operators and vendors are just arming themselves now.

Edge computing, to be a viable infrastructure-based service that operators can resell to their customers, needs a platform that allows third parties to discover, view, reserve and consume it on a global scale, not operator by operator, country by country, and the telco community looks ill-equipped for a fast deployment of that nature.


Whether you favour one side or the other of that argument, the public announcements in that space from AT&T, Amazon Web Services, Deutsche Telekom, Google, Microsoft, Telefonica, Vapour.io and Verizon – to name a few – will likely convince you that edge computing is about to become a reality.

This report analyses the different definitions and flavours of edge computing, the predominant use cases and the position and trajectory of the main telco operators, equipment manufacturers and cloud providers.

Monday, July 22, 2013

Pay TV vs. OTT part V: appointment vs. on-demand


The recent emergence of LTE broadcast and eMBMS has prompted many companies to bet many R&D and marketing dollars on the resurgence of the mobile TV model.

I have trouble believing that many mobile users will be tuning in "en masse" at regular appointments to watch their favorite show on a mobile device.
There is nothing wrong with Pay TV; its audience is stable-ish, and while most would see OTT services compete for these eyeballs, I see them as a more complementary play. Pay TV is here to stay and I do not see cord cutting as a credible threat in the short term, more cord shaving or cord picking.

Many have been developing and promoting mobile TV models in the past either through broadcast or unicast technologies. The long defunct services from Qualcomm (MediaFLO) and DVB-H should serve as cautionary tales to those who are betting on the next generation of broadcast services. 

Many fail to understand that mobile TV is not attractive to most people in many circumstances. If you are like me, you will watch TV programs, by order of convenience:

  1. When I want, at home, on my PVR (so I can skip the ads)
  2. Live, at home, when it is time-sensitive content (news, sports events, ceremonies...)
  3. At a bar, live, when I want to watch sport live with friends or strangers
  4. On a tablet at home (wifi) when I want to watch something else/more than the main screen
  5. On a tablet at hotel /airport (wifi) ...etc... when I want to watch premium content catch up
  6. On a phone / tablet (cellular) if there is no other choice

Don't get me wrong, I watch a good amount of video on mobile, just not TV programs. I remember living in Switzerland some 10 years ago and having one of the first video phones that could perform video calls and stream mobile TV. Past the novelty aspect, no one was watching TV on their phone then, and it wasn't due to network capacity or video quality. Having a video phone then was seriously cool, but that did not take away the fact that the TV content I wanted to watch was not available when I wanted to watch it. My Sony Ericsson K600 (remember?) joined my first smartphone (the Philips Ilium, which I designed at Philips) and my first MMS phone (the Ericsson T68i) in my private museum, together with my first PDA (I sent the world's first picture message on a CDMA iPAQ in 2002).

This is mostly due to the fact that TV is an appointment experience. I like to be comfortable watching TV because I watch only very specific programs. When I sit down to watch TV, I mostly know beforehand what I will watch. The videos I watch on mobile are not necessarily only short form content but I don't mind being interrupted as much because in my mind, it is mostly light entertainment that does not require concentration nor continuity. It is also mostly serendipitous in nature, I do not necessarily plan what I will watch in advance. 
I know that my children's and their elders' behavior is similar. They might watch more long form content on their mobiles than I do, but they are mostly not watching TV content.
While some see broadcast as a means to considerably reduce video load on mobile networks, I think they are missing the point. TV by appointment is a very small portion of the preferred usage, for very specific content, in very specific circumstances. Broadcast TV on mobile makes very little sense apart from niche usage (stadiums,...). 

I don't think that because LTE offers better network capacity, higher speeds and better quality pictures, it will make a better mobile TV service. Don't think for a second that subscribers will pay more than a couple of bucks per month (if anything) to have a TV experience on mobile. People pay for quality, relevance and immediacy on mobile, not the best attributes for broadcast. So before you think about "monetizing" my mobile TV experience, think hard, because I won't pay for TV broadcast on mobile.

If you haven't read the other posts in this series, you can find them here for context.
Pay TV vs. OTT:
Part I: The business models
Part II: Managed devices and services vs. OTT
Part III: CE vendors and companion screens
Part IV: Clash of the titans



Monday, March 5, 2012

NSN buoyant on its liquid net

I was with Rajeev Suri, CEO of NSN, together with about 150 of my esteemed colleagues from the press and analyst community on February 26 at Barcelona's world trade center drinking NSN's Kool Aid for 2012. As it turns out, the Liquid Net is not hard to swallow.

The first trend highlighted is about big data, big mobile data that is. NSN's prediction is that by 2020, consumers will use 1GB per day on mobile networks.
When confronted with these predictions, network operators have highlighted 5 challenges:
  1. Improve network performances (32%)
  2. Address decline in revenue (21%)
  3. Monetize the mobile internet (21%)
  4. Network evolution (20%)
  5. Win in new competitive environment (20%)
Don't worry if the total is more than 100%; either it was a multiple choice questionnaire or NSN's view is that operators are very preoccupied.

Conveniently, these challenges are met with 5 strategies (hopes) that NSN can help with:

  1. Move to LTE
  2. Intelligent networks and capacity
  3. Tiered pricing
  4. Individual experience
  5. Operational efficiency
And this is what has been feeding the company in the last year, seeing sales double to 14B euros in 2011 and turning an actual operating profit of 225m euros. The CEO agrees that NSN is not back yet and more divestment and redundancies are planned (8,500 people out of 17,000 will leave) for the company to reach its long term target of 10% operating profit. NSN expects its LTE market share to double in 2012.

Liquid Net
Liquid Net is the moniker chosen by NSN to answer the general anxiety surrounding data growth and revenue shrinkage. It promises 1,000 times more capacity by 2020 (yes, 1,000) and the very complex equation to explain the gain is as follows: 10x more cell sites (figures...), times 10x more spectrum, times 10x more efficiency.
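For what it's worth, the "very complex equation" is just a product of three independent multipliers; the factor names and the assumption that they compound independently are my own reading of NSN's pitch, sketched here:

```python
# Back-of-envelope sketch of NSN's "1000x capacity by 2020" claim.
# Assumption (mine): the three gains compound as independent multipliers.
factors = {
    "cell site densification": 10,  # 10x more cell sites
    "additional spectrum": 10,      # 10x more spectrum
    "spectral efficiency": 10,      # 10x more efficient air interface
}

total_gain = 1
for name, multiplier in factors.items():
    total_gain *= multiplier

print(total_gain)  # 1000
```

Of course, the marketing math glosses over the fact that each 10x is itself a heroic assumption, particularly the site count, which implies a tenfold densification of physical infrastructure.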

The example chosen to illustrate Liquid Net was, I think, telling. NSN has deployed its network at an operator in the UK, where it famously replaced Ericsson last summer. It has since been able to detect devices and capabilities and adapt video resolution, with Netflix for instance, which resulted in 50% less congestion in some network conditions. That is hard to believe: Netflix being encrypted, I was scratching my head trying to understand how a lossless technique could reach these numbers.
The overall gains claimed for implementing Liquid Net were a 65% capacity increase, a 30% coverage gain and a 35% reduction in TCO.

Since video delivery in mobile networks is a bit of a fixation of mine, I decided to dig deeper into these extraordinary claims. I have to confess my skepticism at the outset. I am familiar with NSN, having dealt with the company as a vendor for the last 15 years, and am all too familiar with its glacial pace of innovation in core networks.

I have to say, having gone through a private briefing, presentation and demonstration, I was surprised by the result. I am starting to change my perspective on NSN and so should you. To find out why and how, you will need to read the write up in my report.

Monday, November 21, 2011

Video optimization 2.0, market reset

On the heels of the Broadband Traffic Management show in London last week, I thought it was time to take stock of this market segment, as most vendors have recently launched their second generation products.

The market leader, Bytemobile (with 55% market share of deployments), started the trend this summer when launching its new dedicated appliance, the T-3000. While this is not strictly a new version of their Unison product, it is a new computing platform sold as an appliance, departing from the software infrastructure business model. It is a first step towards solving some of the scalability issues experienced by the former solution, which saw DPI, policy, charging, web and video optimization inextricably amalgamated, whether you wanted to use all products or not. It also gets rid of those expensive load balancers that were a high cost, low yield proposition. Bytemobile is not the only one to experience price pressure and to take the knife to load balancing as bandwidth requirements increase.

Mobixell, with 16% market share, seems at last to be in a position to digest its 724 Solutions acquisition. While both product lines were quite complementary and had little overlap, it was a tough proposition for Mobixell to acquire 724, rationalize the technologies and workforce, and face the ire of its traditional resellers and OEMs (NSN, Huawei, Ericsson...). These were wary of seeing their supplier compete head to head with them in mobile broadband as Mobixell rolled out 724's seamless gateway proposition along with its streaming and transcoding platform. The result saw Mobixell practice tough price attrition in the market, helped by a low cost structure (724 Solutions' technology comes with integrated routing and inter-process UDP-based communication that provides great scalability at low cost). Mobixell announced the launch of a new product release, called EVO, taking some of the computational power to the cloud. While some are skeptical about how much can be accomplished in the cloud for real time video optimization, it certainly is a good step towards cost and CAPEX containment, worth exploring.



Flash Networks, with 8%, has been quite busy on the market, silently plowing ahead, upgrading existing customers and winning a handful of deals. They have announced a new version of their product and are also taking a big step in technology investment in the space.




Ortiva Wireless, with 3% market share, has seen some very good progress this year, bagging some high profile accounts and nearly tripling their year on year revenue, from an admittedly small footprint. The company has not announced a new version of their product yet, staying on their existing appliance model.




Skyfire Labs, with 2% market share, a very innovative startup with a cloud-based approach that evolved from their tablet and smartphone browsing app, has also been able to grab some high profile tier 1 carriers, together with high profile VAR agreements with infrastructure vendors.


Openwave, with 1% market share, as you know, has had a very busy year on the corporate and financial front (here, here, here, and here), but has not announced much from a product, technology or customer standpoint. They are fighting for their survival and seem to be focusing on a return to financial stability (PS revenue increase, licensing of their patents to Microsoft) before investing further in technology or customer acquisition.






NSN has been developing its homegrown technology, wanting to end the reliance on its traditional partners in the space, and came out with a very basic first attempt focused on lossless transmission. Nowadays, they are trying to push their "liquid" network concept and seem to be going at it in a fairly scattered manner.

These new product announcements signal, beyond the usual technology investment from startups and established vendors, a market fast reaching maturity, only 2 years after inception. Some might even say that this segment is commoditized before having really taken off. According to my calculations, this is a market that has generated about $90 million for vendors this year. We can see from the number of players why price attrition plays an important role, even though traffic is increasing fast. We will see some consolidation and attrition in this space soon, as insufficiently capitalized vendors won't be able to sustain the market growth.

RGB Networks, Juniper, Cisco, Huawei and Acision are all active in this space too, while others are preparing to enter the market. The market shares are {Core Analysis} calculations, part of an upcoming report on the mobile video optimization space. Details and questions can be addressed here or at patrick.lopez@coreanalysis.ca.

Monday, September 12, 2011

Openwave CEO replaced - Consolidations to come in the traffic management market

Openwave announced today the resignation of its CEO, Ken Denman, citing personal reasons. Denman is being replaced by Anne Brennan, the company's CFO.

As we have seen in a previous post, Openwave has been struggling for a while to deliver on the expectations it has raised in the market to provide an integrated traffic management solution for video.

After failing to show results on over 40 announced trials, after failing to upsell their installed base with their next generation of products, and after buying back old patents and suing RIM and Apple, Openwave sees its CEO resign and, the same day, nominates Peter Feld as Chairman of the Board, replacing Charles E. Levine.

This market segment, born from the ashes of the WAP gateway market, sees companies like Acision, Bytemobile, Comverse, Ericsson, Flash Networks, Huawei, Mobixell, Nokia Siemens Networks, and others become the intelligent gateway in the network. That gateway's role is to complement and orchestrate DPI, charging, PCRF and video optimization. It is a key network function.

As most data traffic is browsing related, the companies that used to sell WAP gateways are best positioned to capitalize on upselling a richer, more sophisticated gateway that provides operators the means to control, monetize and optimize browsing and video traffic in their networks.

Openwave has not been able to navigate that trend early enough to avoid its market share being eaten up by traditional competitors and new entrants. Additionally, as traffic has fundamentally changed since tablets and smartphones entered the market, key capabilities such as TCP, web and video optimization were late to appear in Openwave's roadmap and proved challenging to build rather than buy.

Mobixell started the consolidation with the acquisition of 724 Solutions last year.
I bet we will see more consolidations soon.