Monday, May 25, 2020

Why telco operators need a platform for edge computing


Initially published in The Mobile Network.

Extracted from the edge computing and hybrid cloud 2020 report.

Edge computing and hybrid clouds have been the subject of many announcements and acquisitions over the last few months.
Edge computing, in order to provide a capability for developers and third parties to reserve and consume operators' computing, storage and networking capacity, needs a platform. The object of this platform is to provide a web interface and a series of APIs that abstract network topology and complexity, and offer developers a series of cloud services and products to package within their offerings. Beyond the hyperscalers, who have natively developed these platforms, a few vendors have emerged in the telco space, such as MobiledgeX and ORI Industries.
Network operators worldwide are confronted with the inexorable growth of their data traffic, driven by consumers' voracious appetite for video streaming and gaming. Since video content is the largest and fastest growing data type in the networks, an economic challenge is slowly arising. Data charging models have departed from per-megabyte metered billing to bundles and unlimited data, which encourages traffic growth while reducing the operators' capacity to monetize it. Consumers are not willing to pay much more for an HD video versus Standard Definition: for them, it is essentially the same service, and the operator is to blame if the quality is not sufficient. Unfortunately, the problem is likely to accelerate with emerging media-hungry video services relying on 4K, 8K and Augmented Reality. As a consequence, the average revenue per user stagnates in most mature markets, while the costs of expanding network capacity continue to rise.
While 5G promises extraordinary data speeds, enough to complement or equal fixed fibre capacity, there is no real evidence that the retail consumer market will be willing to pay a premium for improved connectivity. If 5G goes the way of 4G, the social media, video streaming, gaming and internet giants will be the ones profiting from the growth in digital services. The costs of deploying 5G networks will range from the low single-digit to the double-digit billions, depending on the market, so… who will foot the bill?
If properly executed, the 5G rollout will, in many markets, become the main broadband access at scale. As this transition occurs, new opportunities arise to bundle mobile connectivity with higher level services. But because the consumer market is unlikely to drastically change its connectivity needs in the short term, the enterprise market is the most likely growth opportunity for 5G in the short to medium term.
Enterprises themselves are undergoing a transformation, with the commoditization of cloud offerings.
Cloud is one of the fastest growing ICT businesses worldwide, with IaaS the fastest growing segment. Most technology companies run their business on cloud technology, be it private or public, and many traditional verticals are now considering the transition.
Telecom operators have mostly lost the cloud battle: AWS, Microsoft, Google and Alibaba have been able to convert their global networks of data centers into an elastic, on-demand, as-a-service economy.
Edge computing, the deployment of mini data centers in telco networks, promises to deliver a range of exciting new digital services. It may power remote surgery, self-driving cars, autonomous industrial robots, drone swarms and countless futuristic applications.
In the short term, though, the real opportunity is for network operators to rejoin the cloud value chain, by providing a hyper local, secure, high performance, low latency edge cloud that will complement the public and private clouds deployed today.
Most private and public clouds ultimately stumble over the "last mile" issue. Not managing the connectivity between the CPE, the on-premise data center and the remote data center means more latency, less control and more exposure to hacking and privacy issues.
Operators have a chance to partner with the developer community and provide them with a cloud flavour that extends and improves current public and private cloud capabilities.
The edge computing market is still emerging, with many different options in terms of location, distribution, infrastructure and management, but what is certain is that it will need to be more of a cloud network than a telco network if it is to succeed in attracting developers.
Beyond the technical details that are being clarified by deployments and standards, the most important gap network operators need to bridge to offer a true cloud experience is the platform. Operators have traditionally deployed private clouds for their own purpose: to manage their networks. These clouds do not have all the features we have come to expect from commercial public clouds (lifecycle management, third-party authentication, reservation, fulfillment…). The key for network operators to capture the enterprise opportunity is to offer a set of APIs as simple as those of the public clouds, so that developers and enterprises may reserve, consume and pay for edge computing and connectivity workloads and pipelines.
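To make this concrete, here is a minimal sketch of what such a developer-facing call could look like. Everything in it is hypothetical (the endpoint, payload and wrapper are invented for illustration); the point is that reserving an edge workload, with connectivity SLAs attached, should be a single, public-cloud-style API call:

```python
import requests

# Hypothetical operator edge platform endpoint and credentials (illustration only).
EDGE_API = "https://api.operator-edge.example/v1"
API_KEY = "YOUR-API-KEY"

def reserve_edge_workload(zone: str, image: str,
                          max_latency_ms: int, min_bandwidth_mbps: int) -> dict:
    """Reserve an edge compute workload with connectivity SLAs in one call,
    the way a developer would launch a VM in a public cloud."""
    payload = {
        "zone": zone,                          # operator edge zone, e.g. a Central Office
        "image": image,                        # container or VM image to deploy
        "sla": {
            "max_latency_ms": max_latency_ms,  # guaranteed CPE-to-workload latency
            "min_bandwidth_mbps": min_bandwidth_mbps,
        },
        "billing": "pay-as-you-go",            # reserve, consume and pay, cloud-style
    }
    response = requests.post(f"{EDGE_API}/workloads", json=payload,
                             headers={"Authorization": f"Bearer {API_KEY}"},
                             timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"workload_id": "...", "status": "provisioning"}

# Example: a workload pinned within 15 ms of subscribers in one metro area.
# reservation = reserve_edge_workload("metro-west-1", "registry.example/app:1.2",
#                                     max_latency_ms=15, min_bandwidth_mbps=100)
```
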
If operators do not open their private clouds to enterprises, a possible outcome is that hyperscalers will expand their clouds into operators' networks and provide these services to their own developer and client communities. Operators would then be confined to a strict connectivity utility model, where traffic prices inexorably decline under competitive pressure and high margin services are captured by the public cloud.
Edge computing can allow operators to offer IaaS and PaaS services to enterprises and developers with unparalleled performance compared to traditional clouds:
  • Ultra-low and guaranteed latency (typically between 3 and 25 ms between the CPE and the first virtual machine in the local cloud)
  • Guaranteed performance (up to 1 Gbps on fibre and 300 Mbps on cellular)
  • Access to mobile edge computing (precise user location, authentication, payment, postpaid / prepaid, demographics… depending on operators' available APIs)
  • Better-than-cloud, better-than-Wi-Fi services and connectivity (storage, video production, remote desktop, collaboration, autonomous robots…)
  • Flexible deployment and operating models (dedicated, multi-tenant…)
  • Guaranteed local data residency (legal, regulatory and privacy compliant)
  • Reduced cloud costs (data thinning and preprocessing before transfer to the cloud)
  • High-performance ML and AI inference
  • Real-time guidance and configuration of autonomous systems


It is likely that many enterprise segments will want to benefit from this high-performance cloud. It is also unlikely that operators alone will be able to design products and services for every vertical and segment. Operators will probably focus on a few specific accounts and verticals, and cloud integration providers will rush in to enable market-specific edge cloud and connectivity services:
  • Automotive
  • Transport
  • Manufacturing
  • Logistics
  • Retail
  • Banking and insurances
  • IoT
  • M2M…

Each of these verticals already has a connectivity value chain, in which network operators are merely a utility provider for higher value services and products. Hybrid local cloud computing offers operators the opportunity to move up the value chain by providing new and enhanced connectivity and computing products directly to consumers (B2C), enterprises (B2B) and developers (B2B2x).

Fixed and mobile networks have not been designed to expose their capabilities to third parties for the reservation, consumption and payment of discrete computing and connectivity services. Edge computing, as a new greenfield environment, is a great place to start for an operator that would like to offer these types of services. Because it is new, there is no deployed legacy and the underlying technology is closer to cloud native, which is necessary to create a developer and enterprise platform. Nonetheless, an abstraction layer is required to federate and orchestrate the edge compute infrastructure and provide a web-based authentication, management, reservation, fulfillment, consumption and payment model for enterprises and developers to contract these new telco services.
This is what a platform provides: an abstraction layer that hides telco network complexity, federates edge computing capacity across various networks and operators, and presents a coherent marketplace for enterprises and developers to build and consume new services offered by the operator community as IaaS, PaaS and SaaS. By deploying a platform, operators can rejoin the cloud supply chain, but they will have to decide whether they want to own the developer relationship (and build their own platform) or benefit from existing ecosystems (and deploy an existing third-party platform). The first option requires a great effort, but the revenues flow directly to the operator and the platform is just another technology layer. In the second, revenues go to the platform provider and are shared with the operator; this provides faster time to market, but less control and margin. This model, in my mind, is inevitable; it remains to be seen whether operators will be able to develop and deploy the first option in time and at scale.
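As a toy illustration of the federation logic such a platform hides from developers, consider zone selection across several operators' networks. The data model and figures below are invented; the point is that the developer expresses intent (a latency budget, a price) and the marketplace resolves which operator's edge actually serves it:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeZone:
    operator: str          # which network operator exposes this zone
    location: str          # geographic placement, e.g. a Central Office
    latency_ms: float      # measured latency from the user's access point
    price_per_hour: float  # published marketplace price

def select_zone(zones: List[EdgeZone], max_latency_ms: float) -> Optional[EdgeZone]:
    """Pick the cheapest zone that meets the latency target, regardless of
    which operator owns it; the marketplace abstracts that detail away."""
    candidates = [z for z in zones if z.latency_ms <= max_latency_ms]
    return min(candidates, key=lambda z: z.price_per_hour, default=None)

# Zones federated from two operators; the developer never sees the topology.
zones = [
    EdgeZone("operator-a", "metro-north-co", latency_ms=6.0, price_per_hour=0.12),
    EdgeZone("operator-b", "metro-north-co", latency_ms=8.5, price_per_hour=0.09),
    EdgeZone("operator-a", "national-dc", latency_ms=32.0, price_per_hour=0.04),
]
best = select_zone(zones, max_latency_ms=10.0)  # -> operator-b, metro-north-co
```
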

Monday, May 11, 2020

Why Telcos need Open Core Surgery


 (This article was initially published in Light Reading)

At Mobile World Congress, TIP (the Telecom Infra Project, an industry forum created by Facebook and a number of leading telco operators and IT vendors) announced the creation of a new project group called Open Core Network. Details started to emerge last week, with a webinar.
The group's ambitious target is to define and develop an open and disaggregated 4G Evolved Packet Core and 5G Core for wireless, wired and Wi-Fi networks, across a variety of use cases.

We have seen in the recent past that various attempts to open up the telco cloud ecosystem and value chain have had mixed results.
  • Telco clouds, based on VNFs and an OpenStack-like virtualization layer, have mostly failed to reach critical mass in deployment and usability.
  • ETSI-defined orchestration efforts based on open source projects such as OSM (Open Source MANO) and ONAP (Open Network Automation Platform) remain works in progress and have equally, to date, failed to become the app stores of automated telco networks.
  • TIP has been successful with the definition, launch and deployment of Open RAN. We have recently seen announcements from Altiostar, Nokia and Cisco in Rakuten's network, as well as from Mavenir in Idea and DISH networks.


As we know, these efforts are aimed at disrupting the current telecom infrastructure provider cost structure by disaggregating traditional networks.
First, by separating hardware from software, so that solutions can be deployed on white boxes (Commercial Off The Shelf, or COTS, hardware) rather than on costly proprietary ones.
Second, by breaking telecom functions into software elements that can be deployed, managed and sourced independently from each other. This is key in the sense that it allows new vendors, who can specialize in specific elements rather than end-to-end solutions, to enter the ecosystem. This increases competition and allows a more flexible sourcing strategy, with either best-of-breed vendors for each element or a selection of vendors for fit-for-purpose deployments. The key to enabling this scenario is an architecture accepted by all, with well-defined software element functions and, more importantly, open, standard, rigid interfaces that guarantee that one vendor can be substituted for another without undue integration effort.
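A small sketch illustrates why these well-defined interfaces matter. The interface below is invented for illustration (the real interfaces would be those defined by the standards bodies and the project group), but it shows how orchestration code written against a fixed contract is indifferent to the vendor behind it:

```python
from abc import ABC, abstractmethod

class SessionManager(ABC):
    """An illustrative, strictly defined contract for one disaggregated
    network function. Any vendor implementing it can be swapped in
    without touching the rest of the network."""

    @abstractmethod
    def establish_session(self, subscriber_id: str, slice_id: str) -> str:
        """Set up a data session and return its identifier."""

    @abstractmethod
    def release_session(self, session_id: str) -> None:
        """Tear the session down and free its resources."""

class VendorASessionManager(SessionManager):
    def establish_session(self, subscriber_id: str, slice_id: str) -> str:
        return f"A-{subscriber_id}-{slice_id}"   # vendor A's internal logic
    def release_session(self, session_id: str) -> None:
        pass                                     # vendor A's internal cleanup

class VendorBSessionManager(SessionManager):
    def establish_session(self, subscriber_id: str, slice_id: str) -> str:
        return f"B-{subscriber_id}-{slice_id}"   # vendor B's internal logic
    def release_session(self, session_id: str) -> None:
        pass                                     # vendor B's internal cleanup

# The operator's orchestration depends only on the contract, so substituting
# vendor B for vendor A is a sourcing decision, not an integration project.
core: SessionManager = VendorBSessionManager()
session = core.establish_session("imsi-001010000000001", "slice-embb-1")
```
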

5G is supposed to be the first telco cloud network that is natively virtualized, software-defined, elastic and automated at scale. This can be achieved today by deploying a single-vendor solution from one of the dominant telco vendors. Things get vastly more complicated if one wants to deploy a multi-vendor network. Since the standards are not quite finalized for some of the elements and behaviours of a 5G network, and operators are announcing and launching 5G networks nonetheless, vendors have to fill the gaps with proprietary implementations and extensions to the standards to make their end-to-end solutions automated, software-defined and elastic.

One last bastion of proprietary telco implementation is the Core network. The Core is basically the brain of the telco network: all the consumer data is stored there, all the charging systems reside there, and all the elements that decide where traffic should go and how it should be treated live there. This brain is very complex and composed of a number of elements that have, until now, usually been sold and deployed by a single vendor. This has long been a Trojan horse for dominant telco vendors to control a network. It is also a self-perpetuating decision, as the evolution from one standard version to another, or from one generation to another, is much more cost-effective as an upgrade of the current vendor's solution than as a rip-and-replace by a new vendor.
With 5G, the traditional vendors had a few different architectural options for Core deployment, and they mostly elected the non-standalone (NSA) version, which can only be deployed as an upgrade to the 4G EPC. It essentially guarantees that a current 4G Core deployment will evolve to 5G with the same vendor, perpetuating the control over the network. This does not only affect the Core; it also affects the Radio Access Network (RAN), whose implementation, in the early stages of 5G, depends on a harmonious interworking with the Core. As a result, many traditional Core vendors who are also RAN vendors have created a situation where the only practical and economical way for an operator to launch 5G fast is to deploy Core and RAN from the same vendor. This situation perpetuates the oligopoly in the telco supply chain, which reduces innovation and increases costs.

TIP's Open Core is an attempt to create a Core network for 4G and 5G that is open, composed of software elements provided by independent vendors, all using the same open interfaces to allow low-touch integration and increase the rate of innovation. If the group follows the same path as Open RAN, it could become a major disruption in telco networks, making possible, for the first time in decades, the deployment of a full telco network from a rich ecosystem of vendors, at an innovation pace in sync with what we have seen from the hyperscaler world.


Friday, May 8, 2020

What are today's options to deploy a telco cloud?

Over the last 7 years, we have seen leading telcos embracing cloud technology as a means to create an elastic, automated, resilient and cost-effective network fabric. There have been many different paths and options, from a technological, cultural and commercial perspective.

Typically, there are 4 categories of solutions telcos have been exploring:

  • Open source-based implementation, augmented by internal work
  • Open source-based implementation, augmented by traditional vendor
  • IT / traditional vendor semi proprietary solution
  • Cloud provider solution


The jury is still out as to which option will prevail, as they all have seen growing pains and setbacks.

Here is a quick cheat sheet of some possibilities, based on your priorities:



Obviously, this table changes quite often, based on the progress and announcements of the various players, but it can come in handy if you want to evaluate, at a high level, some of the options and the pros and cons of deploying one vendor or open source project versus another.

Details and commentary are part of my workshops and report on telco edge and hybrid cloud networks.

Thursday, April 23, 2020

Hyperscalers enter telco battlefront

We have, over the last few weeks, seen a flurry of announcements from hyperscalers investing in telco infrastructure and networks: Facebook's $5.7B investment in India's Reliance Jio, Microsoft's acquisition of Affirmed Networks for $1.35B, AWS' launch of Outposts and Google's Anthos ramp-up.


Why are hyperscalers investing in telecom gear and why now?

Facebook had signalled its intent as far back as 2016, when Mark Zuckerberg presented his vision for the future of the company at Mobile World Congress.


Beyond the obvious transition from picture and video sharing to virtual / augmented reality, tucked in the top right are two innocuous words: "telco infra".
What Facebook realized is that basically anyone who has regular access to broadband will likely use a Facebook service. One way to increase the company's growth is to invent / buy / promote more services, which is costly and uncertain. Another way is simply to connect more people.
With over 2.5 billion users of Facebook products, the company still has some room to grow in this area, but the key limiting factor seems to be connectivity itself. The last billions of broadband-unconnected people are harder to reach because traditional telecom networks do not extend there. The last unconnected are mostly in rural areas: geographically dispersed, with lower incomes than their urban counterparts.
Looking at this problem from their perspective, Facebook reached a conclusion similar to that of the network operators serving these markets: traditional telco networks are too expensive to deploy and maintain to reach this population sustainably. The same tactics employed by operators to disaggregate and stimulate the infrastructure market can be refocused and amplified by Facebook.
This was the start of Facebook Connectivity, a specific line of business in the social media giant's empire to change the cost structure of telco networks. Facebook Connectivity has evolved to encompass a variety of efforts, ranging from the creation of TIP (an open forum to disaggregate and open telco networks), to the co-investment with Telefonica in a joint venture dedicated to connecting the unconnected in Latin America, to this week's announcement of its acquisition of 9.9% of Reliance Jio in India.


How about Microsoft, Google and others?

Google had, even before its recent Anthos cloud platform, dipped its toes in telco waters with Project Fi and its fiber businesses.
Microsoft has been trying for the last 5 years to exploit the telco networks' transition from proprietary to IT. Even IBM's Red Hat acquisition had a telco angle, as the giant also tries to become a more prevalent vendor in the telco ecosystem.

So... why now?

Another powerful pivot point in Telecom is the emergence of 5G. As the latest telephony technology generation rolls out, telco networks are undeniably being re-architected and redesigned to look more like cloud networks. This creates an interesting set of risks and opportunities for incumbents and new entrants alike.
For operators, the main interest is to drastically reduce the cost of rolling out and maintaining complex telco networks by using the powerful virtualization, SDN and automation techniques that have allowed hyperscalers to dominate cloud computing. These technologies, if applied correctly, can transform the cost structure of network operators, which is particularly important at the outset of multi-billion-dollar investments in 5G infrastructure. The radical cost-structure disruption comes from the disaggregation of the network between hardware and software, the introduction of new vendors into the value chain who put price pressure on incumbents, and widespread automation and cloud economics.
These opportunities also bring new risks. While they open up the supply chain with the introduction of new vendors, they also allow new actors to enter the value chain, either to substitute and dominate legacy vendors or to create new control points (see the orchestrator wars I have mentioned in previous posts). The additional risk is that the cost of entry into telco becomes lower for cloud hyperscalers as the technology to run telco networks transitions from a proprietary, closed ecosystem to an open source, cloud environment.

The last pivot point is another telco technology that is very specifically aimed at creating a cloud environment in telco networks: edge computing. It creates a cloud layer that allows the provisioning, reservation and consumption of telco connectivity together with cloud computing. As a greenfield environment, it is a natural entry point for cloud operators and new vendors alike to enter the telco ecosystem.

Facebook, Google, AWS, Microsoft and others seem to think that 5G, and edge computing in particular, will be more cloud than telco. Network operators resist this claim by trying to build 5G networks that offer a fully integrated connectivity and computing experience: complementary to public clouds, but different enough to command a premium, a different value chain and operator control.

In which direction will the market move? This and more in my report and workshop Edge computing and Hybrid Clouds 2020.

Wednesday, April 15, 2020

The business cases of edge computing

Edge computing has been a trendy topic over the last year. Between AWS' launch of Outposts, Microsoft's continuous efforts with Azure Stack, Nvidia's specialized EGX edge platform and Google's Anthos toolkit, much has been said about this market segment.
Network operators, on their side, have announced plans for deployments in many geographies, but with little detail in terms of specific new services, revenues or expected savings.
Having been in the middle of several of these discussions, between vendors, hyperscalers, operators and systems integrators, I am glad to share a few thoughts on the subject.

Hyperscalers have not been looking at edge computing as a new business line, but rather as an extension of their current cloud capabilities. There are many use cases today that cannot be fully satisfied by the cloud, due to a combination of high and variable latency, network congestion, and lack of visibility and control over last-mile connectivity.
For instance, anyone who has tried to edit a diagram online in PowerPoint Office 365, or to play a massively multiplayer online cloud game, will recognize how maddeningly frustrating the experience can be.
Edge computing, in the sense of bringing cloud resources physically closer to where data is consumed or produced, makes sense to reduce latency and the need for dedicated on-premise resources. From a hyperscaler's perspective, edge computing can be as simple as dropping a few racks within an operator's data center to allow their clients to use and configure new availability zones with specific performance and price.

Network operators, who have largely lost the cloud computing wholesale market to the hyperscalers, see edge computing as an opportunity to rejoin the value chain by offering cloud-like services at incomparable performance. Ideally, they would like to capture and retain the emerging high performance cloud computing market that is sure to spawn a new category of digital services, ranging from AI-augmented manufacturing and automation to autonomous vehicles, ubiquitous facial and object recognition, and compute-less smart devices. The problem is that many of these hypothetical services are ill-defined, far-fetched and futuristic, which does not inspire sufficient confidence in the CFO who has to approve the multi-billion-dollar capital expenditure to get going.
But surely, if the likes of Microsoft, Intel, HP, Google, Facebook and AWS are investing in edge computing, there must be something there? What are operators missing to make the edge computing business case positive?

Mobile or multi-access edge computing?

Many operators first looked at edge computing from the mobile perspective. The mobile edge computing business case remains extremely uncertain: there is no identified use case that justifies the cost of deploying thousands of mini compute capabilities at mobile sites in the short term. Even with the prospect of upgrading networks to 5G, the added cost of mobile edge computing is hard to justify.

If not at mobile sites, the best bet for network operators to deploy edge computing is in Central Offices (CO). These facilities house switching platforms for copper, fiber and DSL connectivity, and are overdue for an upgrade in many markets. The deployment of fibre, the replacement of copper and the evolution of technology from GPON to XGS-PON and NG-PON2 are excellent windows of opportunity to replace aging, single-purpose infrastructure with open, software-defined computing capability.
The level of investment needed to retool central offices into mini data centers is orders of magnitude lower than in the mobile case, and it is completely flexible. It is not necessary to convert all central offices; one can proceed by deploying one per state / province / region and increase capillarity as business dictates.

What use cases would make edge computing's business case positive for operators in that scenario?


  • First, for operators who offer triple and quadruple play, there is the opportunity to replace aging dedicated infrastructure for TV, fixed telephony, enterprise and residential connectivity with a cloud-native, software-defined open architecture, which provides interesting savings and benefits. The savings are realized from the separation of hardware and software, the sourcing and deployment of white boxes, and the opex savings of separating the control plane and centralizing and automating service elasticity.
  • Additional savings are to be had with the deployment of content / video caches at the edges. Particularly for TV providers who see on-demand and unicast live traffic increasing, positioning edge caches allows up to 80% savings in content transport (a rough sketch of this math follows the list). This is likely to increase with the upgrade from HD to 4K and 8K and the growth of AR/VR.
  • Lastly, for operators who deploy CPE in their customers' homes, edge computing can drastically simplify these devices and reduce their cost of deployment and maintenance, by moving services into the Central Office and reducing the need for storage and compute in the CPE.
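As a rough back-of-the-envelope (the traffic figure below is assumed purely for illustration; only the ~80% savings figure comes from the point above), a cache hit rate translates directly into avoided transport:

```python
# Back-of-the-envelope: transport traffic avoided by an edge video cache.
# Only the ~80% savings figure comes from the post; the traffic numbers
# below are assumed, purely for illustration.

peak_video_gbps = 100.0    # assumed peak video demand served from one Central Office
cache_hit_rate = 0.80      # share of that demand served locally from the edge cache

backhaul_before = peak_video_gbps                        # everything fetched upstream today
backhaul_after = peak_video_gbps * (1 - cache_hit_rate)  # only cache misses go upstream

print(f"Backhaul before edge cache: {backhaul_before:.0f} Gbps")
print(f"Backhaul after edge cache:  {backhaul_after:.0f} Gbps "
      f"({cache_hit_rate:.0%} transport savings)")

# Since a 4K stream carries several times the bitrate of HD, the absolute
# Gbps saved grows with each step up in video quality, as noted above.
```
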

While the savings can be significant in the long run, no operator can justify replacing existing infrastructure before it has been fully amortized on the strength of these savings alone. This is why some operators are looking at these scenarios only for greenfield fiber deployments or as part of massive copper-replacement windows.
Savings alone, in all likelihood, won't allow operators to deploy at the rhythm necessary to counter the hyperscalers. New revenue streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited by the poor online experience. The most successful game franchises are massively multiplayer online games: they pit dozens of players against each other and require tightly controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, if they deploy gaming servers at the edge. Even without a full-blown gaming service, providing game caching at the edge can drastically reduce the download time for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today and are proven to provide a superior experience compared to the cloud. These services range from high performance cloud storage to remote desktop, video surveillance and recognition.
  • Beyond operator-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
The main issue for operators at this stage is to decide between two paths: let hyperscalers deploy their infrastructure in the operators' networks, surrendering most of the value of these emerging services but opening up a new line of revenue from wholesale hosting; or play it alone, as an operator or a federation of operators, deploying a telco cloud infrastructure and building the platform necessary to resell edge compute resources in their networks.

This and a lot more use cases and business cases in my online workshop and report Edge Computing 2020.

Wednesday, February 26, 2020

Product management playbook 1

My first passion, and my first job before technology strategy, is product management. While these two occupations are naturally intertwined, I miss the satisfaction of defining, curating and polishing a strategy, requirement sets, value proposition and positioning, and seeing these find their market fit.

I have recently helped a few tech companies establish or reinforce product management functions in their organizations, and I thought I could share some of my playbook.


The roles of the product manager

Traditionally, product management functions are introduced in technology companies as soon as a structured process becomes necessary for receiving, categorizing and prioritizing market inputs to development teams.
A start-up might develop a product or service based on its founder's vision, but as soon as the value proposition is confronted with potential clients, new requirements emerge, gaps are identified, and it becomes necessary to categorize and prioritize all these competing inputs into a coherent framework that facilitates the decision-making process.
Product management is the result of the scarcity of resources (engineering, hardware, software, time, investments…) and the ever-growing list of potential applications of a new product or service (new clients, new geographies, new applications, new market segments…).
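One common way to make this prioritization explicit is a weighted scoring model. The criteria and weights below are purely illustrative, not a prescription from this playbook; the value is in forcing competing inputs into a single, defensible ranking:

```python
from dataclasses import dataclass

# Illustrative criteria and weights; any real product organization
# would define its own. Value criteria weigh positively, cost and
# risk criteria negatively.
WEIGHTS = {"revenue_impact": 0.4, "strategic_fit": 0.3, "effort": -0.2, "risk": -0.1}

@dataclass
class Requirement:
    name: str
    scores: dict  # each criterion rated 1 (low) to 5 (high)

def priority(req: Requirement) -> float:
    """Weighted score combining value, cost and risk into one number."""
    return sum(WEIGHTS[c] * req.scores[c] for c in WEIGHTS)

backlog = [
    Requirement("Client X proprietary adapter",
                {"revenue_impact": 4, "strategic_fit": 1, "effort": 5, "risk": 3}),
    Requirement("Multi-tenant support",
                {"revenue_impact": 4, "strategic_fit": 5, "effort": 4, "risk": 2}),
    Requirement("Pay down technical debt in scheduler",
                {"revenue_impact": 1, "strategic_fit": 4, "effort": 3, "risk": 1}),
]

# Rank the competing inputs into a single, defensible order.
for req in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(req):5.2f}  {req.name}")
```
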

The more interactions between the company and the market (clients' meetings, competitive announcements, standards and technology evolution…), the more opportunities are identified for enhancing, evolving and adapting the product or service.
As the product matures, it also invariably accumulates technical debt, the product of design and architectural concessions or compromises made for time to market. This technical debt is a risk to be managed closely: as time goes on, the amount of correction, re-architecture and rework necessary to correct a faulty initial design becomes extremely punitive.



Product management is responsible for knowing, at any time, what the market needs, what the product can do today, what gap exists between these two positions, and the level of risk the company commits to when answering an RFP where gaps with the current offering exist.

How to prioritize requirements?


Every commercial company likes to think of itself as market-, sales- or client-oriented. Satisfying the client is the best and fastest way to generate commercial success. Unfortunately, there are many clients that do not know what they need, or that want something you cannot provide, or that need you to adapt your offering to their proprietary environment to a degree that would make that product version usable only by them.

There are also companies that are differentiating through innovation, essentially creating a new product or market category. In that case, prospects are not necessarily the best guide for developing the product roadmap.

Technical debt is traditionally not a concept that sales or upper management want to concern themselves with. "Losing" time re-architecting a product to deliver the same functionality as today, with months of delay, can be perceived as a waste of time and commercial opportunity.

The evolution of standards, technology or competitive announcements has a variable effect on the urgency of developing one feature versus another, and the difficulty here is that the information is always imperfect and incomplete, and requires a lot of analysis and experience to right-time the investment.
Bugs, errors and failures are always critical to solve when they affect a customer and their revenues; deciding on a workaround, a fix or a redesign is complex and requires a good grip on what the engineering team can produce and what to trade from the roadmap to deliver it.

These four examples are but a few of the decisions that a product management function has to grapple with. As a result, product management is unlikely to satisfy everyone. As the gatekeeper for product decisions, it interfaces with all the key functions in the enterprise, and its decisions affect all of them. For this reason, it is key that product management be completely aligned with and supported by the management team and the CEO.

Product management in the company

The product management function is essentially a decision function. It must evaluate the market, assess the capacity of the engineering team and find the best market fit for its product. It must be able to:

  • create a value proposition, positioning and message for its product that are aligned with the company positioning and the market
  • translate market requirements into product, architecture and technology requirements
  • create documentation and collateral for sales support (brochures, infographics, white papers, business cases, presentations, demos…)
  • negotiate the roadmap, features, requirements, functional specifications and availability with clients, sales and sales support
  • negotiate the design, features, options and availability with engineering
  • report to the management team on product profitability, market position and market share…

Because of the range and variety of decisions involved, the product management function should traditionally report to the CEO, to ensure perfect alignment with the company vision, mission and priorities, and to guarantee equal footing in the negotiations and escalations resulting from its decisions. Because few in the company will know as much about the product from a technical, strategic and commercial perspective combined, product management is a position of trust and responsibility.

For these reasons, it is important that the methodology for receiving and cataloguing market inputs, translating and categorizing requirements, negotiating and recording the results of discussions, and communicating and reporting internally and externally be transparent, well understood by the management team and underpinned by strong processes.

Sunday, February 23, 2020

Telco growth: my objectives, vision, tactics, doctrine at Telefonica




As mentioned in my previous post, telco transformation through innovation and connectivity control requires a strong framework to guide the decision-making process. Here is the list of objectives, vision, strategies, tactics and doctrine that guided me through my time at Telefonica. I believe they can be adapted to many operators' situations and organizations to generate value through the successful launch of new connectivity products.

Objectives:

  • Fast creation of new products and services by systematically leveraging economies of scale, reusing modular technical solutions and automation
  • Creation of a toolbox of technological tools, operating models, best practices, documentation, blueprints, tests and certified solutions...
  • Delivery of complete products: not just technology, but also the operating model, supplier value chain and devops teams...
  • Facilitation of the transition from innovation to business
  • Systematic evaluation of new technologies and suppliers, in the laboratory and in the field
  • Fulfillment of our ambition to transform the industry


Vision:

Create a sustainable commercial growth factory for the company through the systematic research and implementation of services and products that achieve strategic, tactical, commercial and technological advantages based on the network as infrastructure or connectivity as a service.

Strategies:

  • Explore and classify services, market trends, competitors' direct and indirect moves and their technological evolution, to identify risks and opportunities to create/destroy value for the company based on the network as infrastructure or connectivity as a service.
  • Creation or integration of network and IT technologies to disaggregate and control the cost structure of the purchase, implementation and deployment of connectivity functions and services.
  • Choice and implementation of disruptive connectivity services, products or businesses by designing the E2E value chain
  • Transfer of technological parts, services, products to commercial teams ready for production
  • Systematic identification of differential competitive advantages for the company and strategies to achieve their implementation
  • Implementation of innovative work and development methodologies, especially aimed at creating a DevOps/continuous development/continuous testing model for network technologies and connectivity services


Tactics:

  • Systematic disaggregation of high-level commercial systems and products of network and IT integration to identify manufacturers, intermediaries, sources of savings and their organizational and process impact
  • Systematic prioritization of open source for MVPs, to learn the state of the art, limitations, and development and integration needs
  • Projects, products and technology parts delivered with an operating model and an ecosystem of manufacturers / integrators / developers
  • Identification and implementation of critical paths to deliver to the customer as fast as possible (MVPs, early prototypes deployed in commercial networks)


Doctrine:

  • Customer first
    • Development of services, projects, products with priority to the voice of the customer and the business over technology
  • One size does NOT fit all
    • Resist the model of trying to implement the same technology, solution, manufacturer for all parts of the network and all situations. Specification, design and development of technological and commercial solutions that are infinitely modular. Nothing monolithic, so that we can adapt the solutions to the realities of each market / segment
  • Always open
    • Technological development based on open models (APIs, standard and published interfaces, ...)
    • Open Source, wherever possible
    • Multi manufacturer and no lock-in by design
  • Modular, serverless when possible > micro services > containers > VMs > VNFs > PNF
  • Availability, generosity, active collaboration with commercial teams, third parties and transparency of communication
  • Systematic use from the design of
    • Data science
    • UX
    • Security
  • Agility, speed and results
  • Planning, development, iteration, continuous deliveries
  • Hypotheses, design, development, testing, ... Repeat
  • Pivot fast
  • Take calculated risks
  • Stop activities that fail to meet objectives
  • Organizational flexibility, allowing team members to have diverse, multi-project responsibilities that can change during the life cycle of each project
  • Self-management and organizational structures with minimal hierarchy
  • Simple and cheap
  • Systematic simplification of legacy
  • Good enough and cheap >> over-engineered and expensive
  • DevOps
  • Continuous development



If you would like more details, feel free to reach out, I have developed an innovation / transformation workshop to put in practice some of these strategies.

Thursday, February 20, 2020

Telco relevance and growth

I am often asked what I think are the necessary steps for network operators to return to growth. This is usually a detailed discussion, but at a high level, I think a key to operators' profitability is in creating network services that are differentiated.
I have seen so much value being created for consumers and enterprises at Telefonica when we started retaking control of the connectivity, that I think there are some universal lessons to be learned there.

Curating experiences

Creating differentiated network services doesn't necessarily mean looking at hyper-futuristic scenarios involving autonomous drones or remote surgery. While these are likely to occur in the next 10 to 20 years, there is plenty that can be done today to improve user experiences.
For instance, uploading large files or editing graphics files in the cloud is still slow and clumsy. Also, broadband networks' advertised speed has become meaningless for most consumers. How can you have a 600 Mbps connection and still suffer from a pixelated video stream or a lagging gaming session? There are hundreds of these unsatisfactory experiences that could benefit from better connectivity.

These non-optimal experiences are where operators can start creating value and differentiating themselves. After all, operators own their networks; since they do not rely on the open internet for transport, they should presumably be able to control traffic and user experience at a granular level. A better connectivity experience is not always synonymous with more speed; in most cases it means control over throughput, latency and volume.

Accepting this means recognizing that the diktat of "one size fits all" is over for your network. You cannot create a connectivity product that is essentially the same for everyone, whether they are a teenage gamer, an avid video streaming fan, an architect's office, a dentist or a bank branch. They all have different needs, capabilities and price elasticity, and you can't really believe that your network will be able to meet all their needs simultaneously without more control. Growth is unlikely to come, in the future, from everyone paying the same price for the same service. There are pockets of hyper-profitability to extract, but they require granular control of the connectivity.

"Vanilla" connectivity for all will not grow in terms of revenue per user with more general speed.

Creating a differentiated experience for each segment certainly means being able to identify and measure them. That's the easy part: operators mostly have a good, granular grasp of their market segments. The hard part is finding out what these segments want, need and are willing to pay for. The traditional approach is to create a value proposition based on a technology advance and test it in market studies, focus groups, limited trials and trials at scale before a national launch.

While this might work well for services that are universal and apply to a large part of the population, identifying the micro-segments that are willing to pay more for a differentiated connectivity experience requires a more granular approach. Creating experiences that delight customers is usually not the result of a marketing genius who had it all planned in advance. In my experience, creating, identifying and nurturing this value comes from contact with the client, letting them experience the service. There are usually many unintended consequences when one starts playing with connectivity, and many successful telco services are the fruit of such unintended consequences (texting, for instance, was initially a signalling protocol).

Programmable networks

One way to create and curate such experiences is to increase your control over the connectivity. This means disaggregating, virtualizing and software-defining the elements of your access network (virtualize the OLT and the RAN, build a programmable SDN layer).
You should accept that you can't really understand a priori what your customers will value without testing it. There will be a lot of unintended consequences (positive and negative). It is therefore necessary to create a series of hypotheses that you will systematically test with the customer to validate or discard. These tests must happen "in the wild" with real customers, because deploying in live networks with a real population invariably produces many unintended consequences compared to a lab with "friends and family" users.
On average, you might need to test 50-60 variants to find 2 or 3 successful services. In telecom-years, at today's development and testing cycles of roughly two years per iteration, that's about 100 years of serial work. But if you have a programmable network, and know how to program it, these variants can be created and tested at software speed.

Therefore, you need to test often and pivot fast, and you need to be able to test with small, medium and large samples. The key to this is to build an end-to-end CI/CD lab that can coarsely reproduce your network setup from the core, access and transport perspectives. It needs to be software-defined with open interfaces, so that you can permute, swap and configure new elements on demand.
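A toy harness can illustrate the loop. Everything below is a stand-in for real deployments (in an actual lab, the measurement step would drive the SDN controller and the KPI pipeline), but it captures the test-small, pivot-fast, scale-up logic:

```python
import random

def deploy_and_measure(variant: str, sample_size: int) -> float:
    """Stand-in for the CI/CD pipeline: deploy a connectivity-service variant
    to a sample of real users and return a KPI / satisfaction score.
    In a real lab, this step would drive the SDN controller and test harness."""
    rng = random.Random(f"{variant}:{sample_size}")  # deterministic stand-in result
    return rng.uniform(0.0, 1.0)

def progressive_trial(variant: str,
                      stages=(100, 10_000, 1_000_000),
                      threshold=0.6) -> bool:
    """Test on small, then medium, then large samples; pivot fast by
    killing the variant at the first stage that underperforms."""
    for sample_size in stages:
        score = deploy_and_measure(variant, sample_size)
        if score < threshold:
            return False  # stop investing in this variant
    return True  # survived all stages: a candidate for national launch

variants = [f"service-variant-{i:02d}" for i in range(1, 61)]  # 50-60 hypotheses
winners = [v for v in variants if progressive_trial(v)]
print(f"{len(winners)} of {len(variants)} variants survive all trial stages")
```
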

Since current networks and elements are so complex and proprietary, you need to identify greenfields and islands of wilderness in your connectivity where you can experiment in isolation without disrupting your core customer base. At Telefonica, these uncharted connectivity fields were rural networks and edge computing; in other networks, AI-augmented network operations, network slicing or 5G could be perfect experimentation grounds.

Pluridisciplinary teams

Another lesson is that failing to integrate user / customer feedback at every stage of the service's elaboration is deadly. UX designers need to be part of the process from inception and throughout. They might not be as heavily involved in some phases (development) as in others (inception, beta, trial...), so they can be shared across projects.
Increasingly, data science, security and privacy good practices also need to be considered throughout the project's pivot points. In many cases, it is difficult, expensive or impossible to retrofit them if they were not part of the original design.
Products and services do not necessarily need large teams to get off the ground and create value, but they do need dedication and focus. Resist the temptation to have the core team work cross-project: what you gain by identifying possible synergies, you lose in velocity. Instead, have small dedicated teams with core members, plus specialists who are lent from project to project for periods of time.
Foster internal competition. Evaluate often and be ready to pivot or kill projects.

Paradoxically, when you find a successful service, the phase in which these projects are most likely to die, in many organizations, is the transition to the product and business teams. The key is possibly for them not to transition at all. I have long advocated that it is easier for an operator to launch 5G as a separate company than as an evolution, but it is impractical for many operators to consider starting a parallel organization for network transformation. These innovations, if they are to transform the way networks and services are managed, must be accompanied by a continuous training process and a constant rotation of resources between innovative and live projects. Transformation and innovation are therefore not the work of a dedicated team but of the whole workforce, and everyone has the opportunity to participate in innovation projects, from inception to delivery.


Beyond the "how", the teams need a clear framework to guide them in their daily decision making. The "what" needs to be oriented by a vision, strategies, tactics and a doctrine that will explore in a subsequent post.

Please share your experience with transformation and innovation projects in the telco world. We all grow by sharing. "A rising tide lifts all boats".

Interested in how these principles were applied to the creation of the Open RAN market? Contact me for a copy of the report "xRAN 2020".