Tuesday, January 28, 2020

Announcing telco edge computing and hybrid cloud report 2020


As I ramp up to the release of my latest report on telco edge computing and hybrid cloud, I will be releasing some previews. Please contact me privately for the availability date, price and conditions.

In the 5 years since I published my first report on the edge computing market, it has evolved from an obscure niche to a trendy buzzword. What originally started as a mobile-only technology has grown into a complex field, with applications in IT, telco, industry and cloud. Having worked on the subject for 6 years, first as an analyst, then as a developer and network operator at Telefonica, I have noticed that the industry's perception of the space has polarized drastically with each passing year.

The idea that telecom operators could deploy and operate a decentralized computing fabric throughout their radio access network has largely been swept aside by the inexorable advance of cloud computing, which has shown a capacity to abstract decentralized compute into a coherent, easy-to-program, easy-to-consume data-center-as-a-service model.

As is often the case, there are widely diverging views on the likely evolution of this model:

The telco-centric view:

Edge computing is a natural evolution of telco networks. 
5G necessitates robust fibre-based backhaul transport. With the deployment of fibre, it is imperative that the old copper switching centers (the central offices) be converted into multi-purpose mini data centers. These are easier and less expensive to maintain than their traditional counterparts and offer interesting opportunities to monetize unused capacity.

5G will see a new generation of technology providers deploying cloud-native, software-defined functions that help deploy and manage computing capabilities all the way to the fixed and radio access networks.

Low-risk internal use cases such as CDN, caching, local breakout, private networks, parental control, and DDoS detection and isolation are enough to justify investment and deployment. The infrastructure, once deployed, opens the door to more sophisticated use cases and business models, such as low-latency compute-as-a-service or wholesale high-performance localized compute, that will extend traditional cloud models and services into a new era of telco digital revenues.

Operators have long run decentralized networks, unlike cloud providers, who favour federated centralized networks, and that experience will be invaluable in administering and orchestrating thousands of mini data centers.

Operators will be able to re-enter the cloud value chain through edge computing. Their right to play is underpinned by their control of, and capacity to program, last-mile connectivity, and by the fact that traditional public clouds will not outmatch them in the number and capillarity of data centers in their geographies (outside of the US).

With its long-standing track record of creating interoperable decentralized networks, the telco community will create a set of unifying standards enabling an abstraction layer across all telcos, so that edge computing services can be sold irrespective of network or geography.

Telco networks are managed networks; unlike the internet, they can offer programmable, guaranteed quality of service. Together with 5G evolutions such as network slicing, operators will be able to offer tailored computing services with guaranteed speed, volume and latency. These network services will be key to the next generation of digital and connectivity services enabling autonomous vehicles, collaborative robots, augmented reality and pervasive AI-assisted systems.
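To make the slicing idea concrete, here is a minimal Python sketch of what an order for a slice with guaranteed QoS might look like. The SliceRequest structure and all of its field names are my own illustrative assumptions; real slice templates are defined by 3GPP and the GSMA, not by this sketch.

    from dataclasses import dataclass

    @dataclass
    class SliceRequest:
        """Hypothetical order for a network slice with guaranteed QoS."""
        tenant: str
        coverage_area: str          # e.g. "Madrid-metro"
        max_latency_ms: float       # guaranteed one-way latency budget
        min_throughput_mbps: float  # guaranteed downlink throughput
        monthly_volume_gb: int      # committed traffic volume
        edge_compute_vcpus: int     # compute reserved at the nearest edge site

    def feasible(req: SliceRequest) -> bool:
        """Coarse admission check an operator might run before accepting."""
        return req.max_latency_ms >= 1.0 and req.min_throughput_mbps <= 10_000

    order = SliceRequest(tenant="robot-factory-42", coverage_area="Madrid-metro",
                         max_latency_ms=10.0, min_throughput_mbps=200.0,
                         monthly_volume_gb=5_000, edge_compute_vcpus=64)
    print(feasible(order))  # True under these illustrative limits

The point of the sketch is that the guarantee is expressed, and billed, per parameter, which is exactly what a best-effort network cannot sell.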

The cloud-centric view:

Edge computing, as it turns out, is less about connectivity than about cloud, unless you are able to weave in programmable connectivity. 
Many operators have struggled with the creation and deployment of a telco cloud, whether for their own internal purposes or to resell cloud services to their customers. I don't know of any operator whose telco cloud is fully functional, serves a large proportion of its traffic or customers, and is anywhere near as elastic, economical, scalable and easy to use as a public cloud.
So, while the telco industry has been busy trying to develop a telco edge compute infrastructure, virtualization layer and platform, the cloud providers have just started developing decentralized mini data centers for deployment in telco networks.

In 2020, the battle to decide whether edge computing is more about telco or about cloud is likely already finished, even if many operators and vendors are just arming themselves now.

To be a viable infrastructure-based service that operators can resell to their customers, edge computing needs a platform that allows third parties to discover, view, reserve and consume it on a global scale, not operator by operator or country by country, and the telco community looks ill-equipped for a fast deployment of that nature.
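As a sketch of what such a platform's surface might look like, here is a toy Python model; the EdgeExchange class, its methods and the site data are invented for illustration, since no such industry-wide API exists today.

    import uuid

    class EdgeExchange:
        """Toy model of a multi-operator edge capacity exchange."""

        def __init__(self, sites):
            self.sites = sites            # catalog aggregated across operators
            self.reservations = {}

        def discover(self, country, max_latency_ms):
            """Find edge sites meeting a latency budget, operator-agnostically."""
            return [s for s in self.sites
                    if s["country"] == country and s["latency_ms"] <= max_latency_ms]

        def reserve(self, site_id, vcpus):
            """Reserve compute on a site; returns a reservation id."""
            rid = str(uuid.uuid4())
            self.reservations[rid] = {"site": site_id, "vcpus": vcpus}
            return rid

    # A developer consumes capacity across operators in one flow,
    # without negotiating with each operator individually.
    exchange = EdgeExchange(sites=[
        {"id": "muc-01", "operator": "DT",  "country": "DE", "latency_ms": 8},
        {"id": "mad-03", "operator": "TEF", "country": "ES", "latency_ms": 12},
    ])
    candidates = exchange.discover(country="DE", max_latency_ms=10)
    reservation = exchange.reserve(candidates[0]["id"], vcpus=16)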


Whether you favour one side or the other of that argument, the public announcements in that space from AT&T, Amazon Web Services, Deutsche Telekom, Google, Microsoft, Telefonica, Vapor IO and Verizon – to name a few – will likely convince you that edge computing is about to become a reality.

This report analyses the different definitions and flavours of edge computing, the predominant use cases and the position and trajectory of the main telco operators, equipment manufacturers and cloud providers.

Wednesday, January 22, 2020

vRAN, cRAN, O-RAN... what's going on with the Telco Radio Network?

I have done more than a dozen due-diligence engagements in the virtualized, cloud and disaggregated RAN market space over the last 2 months.

While many telco cloud implementations have been somewhat disappointing, due to the industry's inability to force traditional telecommunications vendors to adopt an open orchestration model, the Radio Access Network (RAN) has been undergoing a similar transformation with a different outcome.

Observing that the same market pressures have forced traditional telecommunications vendors into oligopolistic consolidation, some operators have been trying to open up the RAN value chain by forcing the implementation of a disaggregated model.
Specifically, this means separating hardware from software, deploying radio solutions on commercial off-the-shelf (COTS) hardware, and virtualizing logical elements such as Remote Radio Units (RRUs) and Baseband Units (BBUs).
The RAN is the last part of the network that still sees heavy proprietary implementation, from hardware to software to professional services, and most contracts tend to be turnkey engagements covering study, implementation, installation and maintenance, with fairly opaque margins. This is the bread and butter of traditional telco vendors.
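As a rough illustration of the disaggregated model, here is a Python sketch; the two-layer split and all the names in it are simplified assumptions (real functional splits, such as those defined by the O-RAN Alliance, are considerably more involved).

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CotsServer:
        """Generic off-the-shelf hardware, sourced independently of the radio vendor."""
        cpu_cores: int
        accelerator: Optional[str]  # e.g. an FPGA card for baseband processing

    @dataclass
    class VirtualBbu:
        """Baseband processing as software, deployable on any suitable COTS host."""
        vendor: str
        version: str
        host: CotsServer

    site = VirtualBbu(vendor="emerging-vendor-A", version="2.1",
                      host=CotsServer(cpu_cores=32, accelerator="FPGA"))
    # Swapping vendors now means replacing software, not ripping out hardware.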

The idea is that if you force all vendors to move to software and source COTS hardware, you gain CAPEX efficiency (COTS is cheaper than proprietary hardware), and if you virtualize all the software, you gain OPEX efficiency (you can, ideally, manage everything in software, which leads to on-demand use and cost, as well as vendor independence if the interfaces are open).
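As a back-of-the-envelope illustration of that CAPEX/OPEX argument, here is a small Python calculation; every figure in it is an invented assumption for illustration, not vendor pricing or market data.

    # Illustrative five-year cost comparison for one cell site.
    YEARS = 5

    def tco(capex, opex_per_year):
        """Total cost of ownership over the period."""
        return capex + opex_per_year * YEARS

    proprietary   = tco(capex=100_000, opex_per_year=20_000)  # appliance + manual operations
    disaggregated = tco(capex=60_000,  opex_per_year=12_000)  # COTS + software-managed operations

    saving = 1 - disaggregated / proprietary
    print(f"{proprietary:,.0f} vs {disaggregated:,.0f}: {saving:.0%} saving")
    # -> 200,000 vs 120,000: 40% saving, under these invented assumptions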
Predictably, just as in the case of telco clouds, traditional vendors hesitated to cannibalize a multi-billion-dollar annual revenue stream. Unlike in telco cloud, though, a number of operators (BT, Deutsche Telekom, Telefonica, Vodafone...) elected to show how serious they were about RAN cost optimization and decided to approach a number of emerging vendors.
These vendors, unlike the traditional telco vendors, provide an array of innovative solutions that are usually cloud-native (software-defined, virtualized, hardware-agnostic) and are eager to attack the incumbents' market.
To show a clear market commitment, Telefonica and Vodafone released public tenders at Facebook's TIP summit, two years ago and last year respectively. The process is remarkable in the telco space in that the list of invited companies (any company that is part of TIP), the content of the RFI and the ranking of the participating vendors were publicly announced.

The results ranked the vendors on their performance, openness and time-to-market readiness. The sponsoring operators committed to clear timeframes for lab trials, field trials and commercial deployments.
The tender showed that an ecosystem of innovative vendors is capable of emerging and complementing the current value chain with adapted solutions.
Generally speaking, these emerging companies do not have to manage the legacy of hundreds of obsolete product releases, which allows them a much faster development cycle. This matters because, in many cases, they do not have the product depth of their more mature competition (2G, 3G, 4G, 5G; small, mini and macro cells). On the other hand, they also often lack the operational maturity to manage a RAN deployment at scale for a tier-one operator.

Cloud RAN (cRAN), Virtualized RAN (vRAN) and Open RAN (O-RAN) are all terms that have emerged in the last few years to describe this new ecosystem. While most operators would rather continue purchasing from their traditional vendors to reduce operational and financial risk, the race to 5G and the pressure on margins and on operators' stock prices are forcing them to reevaluate the RAN value chain.
Open, open source and disaggregation are trendy topics in telco, but they come with a steep learning curve as they force those who want to follow this path to actually change their operating model if they want to extract the most value.
The market needs the emergence of a new category of open source distributors, a Red Hat of telco open source if you will, as well as a new category of systems integrators that can take on the effort of assembling these new vendor categories into coherent solutions and services that traditional telcos will want to purchase.

Hit me up if you want more details on that market space; I am preparing a workshop and report on it.

Wednesday, January 8, 2020

Open or open source?

Those who know me know that I have been a firm supporter of openness by design for a long time. It is important, though, not to conflate openness and open source when it comes to telco strategy.

Most network operators believe that any iteration of their network elements must be fully interoperable within their internal ecosystem (their network) and their external ecosystem (other telco networks). This is fundamentally what allows any phone user to roam onto and use any mobile network around the planet.
This need for interoperability has reinforced the importance of standards bodies such as ETSI and 3GPP, and forums such as the GSMA, over the last 20 years. This interoperability by design has led to the creation of rigid interfaces, protocols and datagrams that govern how network elements should integrate and interface in telco and IP networks.
While this model has worked well for creating a unified global aggregation of networks with 3G/4G, moving on from the fragmentation of 2G (GSM, CDMA, TDMA, AMPS...), it has also somewhat slowed and stifled the pace of innovation in network functions.

The last few years have seen an explosion of innovation in networks, stemming from the emergence of data centers, clouds, SDN and virtualization. The benefits have been considerable, ranging from freedom from proprietary hardware dependency to increased multi-tenancy, resource elasticity, traffic programmability, automation and, ultimately, the atomization of network functions into microservices. This has allowed the creation of higher-level network abstractions without the need for low-level programming or coding (for more on this, read anything ever written by the excellent Simon Wardley). These benefits have been systematically developed and enjoyed by the companies that needed to scale their networks the fastest: the webscalers.

In the process, as the technologies underlying these new networks passed from prototype to product, to service, to microservice, they became commoditized. Many of these technologies, once close to maturity, were open sourced, allowing a community of similarly interested developers to flourish and build new products and services.

Telecom operators were inspired by this movement and decided that they, too, needed to evolve their networks into something more akin to an elastic cloud, in order to decorrelate traffic growth from cost. Unfortunately, the desire for interoperability and the lack of engineering development resources led operators to try to influence and drive the development of a telco open source ecosystem without really participating in it. NFV (Network Functions Virtualization) and telco OpenStack are good examples of great ideas with poor results. Let's examine why:

NFV was an attempt to separate hardware from software and stimulate a new ecosystem of vendors developing telco functions in a more digital fashion. Unfortunately, the design of NFV was a quasi-literal transposition of appliance functions, with little influence from SDN or microservice architecture. More importantly, it relied on an orchestration function that was meant to become the "app store" of the network. This orchestrator, to be truly vendor-agnostic, would have to be fully interoperable with all vendors adhering to the standard and preferably expose open interfaces to allow interchangeability of network functions and of orchestrator vendors. In practice, none of the traditional telecom equipment manufacturers had plans to integrate with third-party orchestrators, and each would try to deploy its own as a condition for deploying its network functions. Correctly identifying the strategic risk, the community of operators started two competing open source projects: Open Source MANO (OSM) and the Open Network Automation Platform (ONAP).
Without entering into the technical details, both projects suffered to varying degrees from a cardinal sin: open source development is not a spectator sport. You do not decree an ecosystem or a community of developers into existence; you do not demand contribution, you earn it. The only way open source projects succeed is if their main sponsors actively contribute (code, not diagrams or specs) and if the code goes into production, where its benefits are easily illustrated. In both cases, most operators opted to rely heavily on third parties to develop what they envisioned, with insufficient real-life experience to ensure the results were up to the task. Only those who roll up their sleeves and develop really benefit from these projects.
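To illustrate the kind of open interface the orchestration story depended on, here is a minimal Python sketch; the class and method names are my own illustrative assumptions and do not correspond to the actual OSM or ONAP APIs.

    from abc import ABC, abstractmethod

    class VnfOrchestrator(ABC):
        """Hypothetical vendor-agnostic orchestration surface.

        If every vendor's functions could be driven through an interface
        like this one, both network functions and orchestrators would be
        interchangeable, which is precisely what the incumbents resisted.
        """

        @abstractmethod
        def onboard(self, vnf_package: str) -> str:
            """Register a vendor's VNF package, returning a descriptor id."""

        @abstractmethod
        def instantiate(self, descriptor_id: str, datacenter: str) -> str:
            """Deploy the function on the target infrastructure."""

        @abstractmethod
        def scale(self, instance_id: str, replicas: int) -> None:
            """Grow or shrink a running function on demand."""

        @abstractmethod
        def terminate(self, instance_id: str) -> None:
            """Tear the function down and release its resources."""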

OpenStack was, by comparison, already a successful ecosystem and open source development forum when telco operators tried to bend it to their purpose. It had been deployed in many industries, ranging from banking and insurance to transportation and manufacturing, and had a large developer community. Operators thought that piggybacking on this community would accelerate the development of an OpenStack suited for telco operations. The first efforts were to introduce traditional telco requirements (high availability, geo-redundancy, granular scalability...) into a model that was fundamentally best-effort IT cloud infrastructure management. As I wrote 6 years ago, OpenStack at that stage was ill-suited to the telco environment. And it remained so. Operators resisted hiring engineers and coding sufficient functions into OpenStack to make it telco grade, instead relying on their traditional telco vendors to do the heavy lifting for them.
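To give a flavour of the gap, here is a toy Python sketch of one such telco requirement, anti-affinity placement for redundant functions; it is a simplified model of the concept, not OpenStack code.

    # Two redundant instances of a network function must never land on
    # the same host, or a single hardware failure takes out both.

    def place(replicas, hosts):
        """Assign each replica to a distinct host, or refuse outright."""
        if len(hosts) < len(replicas):
            raise RuntimeError("not enough hosts to honour anti-affinity")
        return {replica: host for replica, host in zip(replicas, hosts)}

    # The telco-grade expectation is that the request fails loudly rather
    # than silently co-locating both replicas, as a best-effort IT
    # scheduler might.
    print(place(["vFirewall-a", "vFirewall-b"], ["host-1", "host-2"]))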

The lessons here are simple.
If you want to build a network that is open by design, to ensure vendor independence, you need to manage the control layer yourself. In all likelihood, trying to specify it and asking others to build it for you will fail if you have never built one yourself.
Open source can be a good starting point if you want to iterate and learn fast, prototype and test, and get smart enough to know what is mature, what can or should be bought, what should be developed and where the differential value lies. Don't expect open source to be a means for others to do your labour. The only way to get more out of open source than you put in is a long-term investment with real contributions, not just guidance and governance.