Tuesday, October 6, 2020

Telco grade or Cloud grade?

 

For as long as I can remember, working in Telco, there has been the assumption that Telco networks were special. 

They are regulated, they are critical infrastructure, they require a level of engineering and control that goes beyond traditional IT. This has often been the reason why some technologies and vendors haven't been that successful in that space, despite having stellar records in other equally (more?) demanding industries such as energy, finance, space, defence...

Being Telco grade, when I cut my teeth as a telco supplier, meant high availability (5x9's), scalability and performance (100's of millions of simultaneous streams, connections, calls, ...), and security, achieved with multiple vertical and horizontal redundancies, and deployed on highly specialized appliances.

Along comes the Cloud, with its fancy economics, underpinned by separation of hardware and software, virtualization, then decomposition, then disaggregation of software elements into microservices. Add to it some control / user plane separation, centralized control, management, configuration, deployment, roll out, scalability rules... a little decentralized telemetry and systematic automation through radical opening of API between layers... That's the recipe for Cloud grade networks.

At the beginning, the Telco-natives looked at these upstarts with a little disdain: "that's good for web traffic. If a request fails, you just retry. It will never be enough for Telco grade...". 

Then with some interest "maybe we can use that Cloud stuff for low networking, low compute stuff like databases, inventory management... It's not going to enable real telco grade stuff, but maybe there are some savings".

Then, more seriously, "we need to harness the benefits of the cloud for ourselves. We need to build a Telco cloud". This is about the time the seminal white paper on Telco virtualization launched NFV and a flurry of activities to take an IT-designed cloud fabric (read Openstack) and make it Telco grade (read: pay traditional Telco vendors, who had never developed or deployed a cloud fabric at scale, to make proprietary branches of an open source project, hardened with memorable features such as DPDK, SR-IOV and CPU pinning, so that their proprietary software, ported onto a hypervisor, would not die under the performance SLAs...). 

Fast forward a few years: orchestration and automation become the latest targets, and a zoo of competing proprietary-turned-open-source projects starts to emerge, while large communities of traditional telco vendors are invited to charitably contribute time and code on behalf of Telcos to projects that they have no interest in developing or selling.

In the meantime, Cloud grade has grown in coverage, capacity, ecosystem, revenues, use cases, flexibility, availability, scalability... by almost any metric you can imagine, while reducing costs and prices. Additionally, we are seeing new "cloud native" vendors emerge with Telco products that are very close to the Telco grade ideal in terms of performance, availability and scalability, at a fraction of the cost of the Telco-natives. Telco functions that the Telco-natives swore could never find their way to the cloud are being deployed there: security, connectivity, core networks, even RAN...

I think it is about time the Telco-natives accept and embrace that it is probably faster, more cost efficient and more scalable to take a Cloud-native function and make it Telco grade than to take the whole legacy Telco network and try to make it Cloud grade. It doesn't mean throwing away all the legacy investment, but it does mean at least considering a sunsetting strategy and cap and grow. Of course, it also means being comfortable with the fact that the current dependencies on traditional Telco vendors might have to be traded for dependencies on hyperscalers, who might, or might not, become competitors down the line. Not engaging with them is not going to change that fact. 5G Standalone, Open RAN or MEC are probably good places to start, because they are greenfield. This is where the smart money is these days, as entry strategies into the Telco world go...



Friday, September 18, 2020

Rakuten: the Cloud Native Telco Network

Traditionally, telco network operators have only collaborated in very specific environments; namely standardization and regulatory bodies such as 3GPP, ITU, GSMA...

There are a few examples of partnerships, such as Bridge Alliance or BuyIn, mostly for procurement purposes. When it comes to technology, integration, product and services development, examples of one carrier buying another's technology and deploying it in its own network have been rare.

It is not so surprising, if we look at how, in many cases, we have seen operators use their venture capital arm to invest in startups that end up rarely being used in their own networks. One has to think that using another operator's technology poses even more challenges.

Open source and network disaggregation, with associations like Facebook's Telecom Infra Project, the Open Networking Foundation (ONF), the Linux Foundation or the O-RAN Alliance, have somewhat changed the nature of the discussions between operators.

It is well understood that the current oligopolistic situation in terms of telco network suppliers is not sustainable in terms of long term innovation and cost structure. The wound is somewhat self-inflicted, having forced vendors to merge and acquire one another in order to sustain the scale and financial burden of surviving 2+ year procurement processes with drastic SLAs and penalties.

Recently, these trends have started to coalesce, with a renewed interest for operators to start opening up the delivery chain for technology vendors (see open RAN) and willingness to collaborate and jointly explore technology development and productization paths (see some of my efforts at Telefonica with Deutsche Telekom and AT&T on network disaggregation).

At the same time, hyperscalers, unencumbered by regulatory and standardization purview, have been able to achieve global scale and dominance in cloud technology and infrastructure. With the recent announcements by AWS, Microsoft and Google, we can see that there is interest and pressure to help network operators achieve cloud nativeness by adopting the hyperscalers' models, infrastructure and fabric.

Some operators might feel this is a welcome development (see Telefonica O2 Germany announcing the deployment of Ericsson's packet core on AWS) for specific use cases and competitive environments. 

Many, at the same time, are starting to feel the pressure to realize their cloud native ambition, but without hyperscalers' help or intervention. I have written many times about how telco cloud networks and their components (Openstack, MANO, ...) have, in my mind, failed to reach that objective. 

One possible guiding light in this industry over the last couple of years has been Rakuten's effort to create, from the ground up, a cloud native telco infrastructure that is able to scale and behave as a cloud, while providing the proverbial telco grade capacity and availability of a traditional network. Many doubted that it could be done - after all, the premise behind building telco clouds in the first place was that public cloud could never be telco grade.

It is now time to accept that it is possible and beneficial to develop telco functions in a cloud native environment.

Rakuten's network demonstrates that it is possible to blend traditional and innovative vendors from the telco and cloud environments to produce a cloud native telco network. The skeptics will say that Rakuten has the luxury of a greenfield network, and that much of its choices would be much harder in a brownfield environment.




The reality is that whether in the radio, the access, or the core, in OSS or BSS, there are vendors now offering cloud native solutions that can be deployed at scale with telco-grade performance. The reality as well is that not all functions and not all elements are cloud native ready. 

Rakuten has taken the pragmatic approach to select from what is available and mature today, identifying gaps with their ideal end state and taking decisive actions to bridge the gaps in future phases.




Between the investment in Altiostar, the acquisition of Innoeye and the joint development of a cloud native 5G Standalone Core with NEC, Rakuten has demonstrated clarity of vision, execution and commitment to not only be the first cloud native telco, but also to become the premier cloud native telco supplier with its Rakuten Mobile Platform. The latest announcement of an MoU with Telefonica could be a strong market signal that carriers are ready to collaborate with other carriers in a whole new way.


Thursday, August 27, 2020

Edge computing risks and opportunities for operators and hyperscalers

 Part of a presentation to the World Bank's investment teams regarding institutional investment in ICT for emerging countries.


Friday, July 31, 2020

Objectives of xRAN


The primary objective of xRAN was to change the cost of designing, purchasing and operating Radio Access Networks. This can be achieved by a variety of means:

Software virtualization

Traditional RAN vendors provide integrated proprietary hardware and software solutions for their equipment. Separating the hardware from the software, and virtualizing the latter yields a variety of benefits:
Part of the hardware can now be purchased commercial off the shelf, based on cost efficient white box designs.
Virtualized software is able to make full use of Software Defined Networking (SDN). When software is virtualized using virtual machines, virtual bridges connect the VMs to the physical servers, and virtual switches such as OVS are used to optimize server utilization, while cables and physical switches connect the physical servers to each other. Hyperscalers identified early on that white box switches can be deployed at a fraction of the cost of the proprietary switches used in telco networks. In practice, it is very difficult to orchestrate VMs that are not on the same server, or to match VMs to the physical servers' capacity, without such a centralized, software-defined view of the network.
In a software-defined network, the decision-making processes for the categorization, management and routing of IP traffic are separated from the software functions and centralized in the form of a Controller. That Controller can expose (northbound) interfaces to define the rules for traffic handling and (southbound) interfaces to program the traffic management elements. This makes it possible to create sophisticated traffic rules that optimize for performance, latency, congestion or failure avoidance. When applied to the RAN, the term SD-RAN is sometimes used to describe these systems.
An SD-RAN can be managed remotely, from a controller API or a web interface, rather than by dialing into each network element separately, whether remotely or physically. This yields operational savings inasmuch as less in-the-field maintenance is necessary and technicians do not need to physically access the equipment to perform upgrades, patches and maintenance.
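The northbound / southbound split described above can be sketched in a few lines of Python. This is a toy illustration only, with hypothetical names: a real SD-RAN controller would expose these interfaces over REST or gRPC APIs to physical switches, not as in-process calls.

```python
# Toy centralized SDN controller: the northbound interface lets an
# operator declare traffic-handling intent once; the southbound
# interface pushes the resulting rules to every forwarding element.
# All class and method names are illustrative assumptions.

class ForwardingElement:
    """Stands in for a white box switch programmed southbound."""
    def __init__(self, name):
        self.name = name
        self.rules = []

    def program(self, rule):
        # Southbound: install one rule on this element.
        self.rules.append(rule)

class Controller:
    def __init__(self, elements):
        self.elements = elements
        self.policies = []

    def add_policy(self, match, action, priority=0):
        """Northbound: declare intent, not per-box configuration."""
        self.policies.append({"match": match, "action": action,
                              "priority": priority})
        self._push()

    def _push(self):
        # Southbound: translate the centralized policy into per-element
        # rules, highest priority first (e.g. latency-sensitive traffic).
        ordered = sorted(self.policies, key=lambda p: -p["priority"])
        for element in self.elements:
            element.rules = []
            for policy in ordered:
                element.program(policy)

switches = [ForwardingElement("cell-site-1"), ForwardingElement("agg-1")]
ctl = Controller(switches)
ctl.add_policy(match={"dscp": 46}, action="low-latency-path", priority=10)
ctl.add_policy(match={"dscp": 0}, action="best-effort-path")
```

The point of the sketch is the single point of change: one `add_policy` call updates every element, which is what removes the need to dial into each box separately.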

 Open interfaces and solution disaggregation

RANs are composed of a variety of elements that are tightly integrated in a traditional solution. The interfaces and protocols linking these elements are closed, which means that only the vendor of the solution can perform a change: even when they use standards-based interfaces, they augment them with proprietary parameters and headers. Opening these interfaces means designing, specifying and enforcing the implementation of standards-based interfaces between these elements, without any modification. This yields a variety of benefits:
  •      These elements can be scaled independently from each other.
  •      Since the interfaces are open, an element from one vendor can be replaced with one from another vendor with minimum testing and integration.
  •      It is possible to deploy elements from different vendors in the same network configuration, allowing best-of-breed deployment for specific use cases.

Value chain disaggregation

A key means to reduce the cost structure of the traditional RAN value chain is to reduce the dependency on powerful vendors. This can be achieved by breaking the RAN products and services into modular components and essentially going directly to the Original Design Manufacturers (ODMs) to try to get better commercial conditions. This tactic is not always effective. These ODMs might be eager to get closer to the end customer and cut out the intermediary, but they are usually wary of alienating their current customers (the OEMs), with whom they have long-term relationships and volume commitments. Additionally, the traditional vendors provide valuable design, integration and testing work which now has to be carried out by the operator or their new subcontractors.
A better solution is to try to stimulate the market so that new suppliers emerge to challenge the supremacy of the traditional vendors. This can be achieved in a variety of ways. Many operators have an investment arm, or an incubator or accelerator for start-ups. A telecom operator entering a technology company's capital as a strategic investor is a good signal to the market. When several operators invest in companies in the same market segment, it shows that the segment is strategic and forces other venture capital companies to evaluate whether they should invest in similar companies. It spurs, in turn, entrepreneurs to create start-ups in that area. It is a difficult virtuous circle to create and takes time, but it is powerful once it has sufficient momentum.
Another tactic is the in-house development of a new solution, together with the creation of an open source community. The seed development does not need to be huge, but it needs to show steadfast, long-term commitment to the developer community for interested parties to adhere to the project. Open source development can become a force multiplier as start-ups emerge to industrialize and resell the shared code.
The last tactic, and probably the most effective, is the purchase and deployment of these new vendors' products and services. Telco operators are notoriously slow to take purchasing decisions, and the sales engagement is a marathon, from presentations, to demonstrations, to proof of concept, to lab deployment, to integration tests and hundreds of other steps before deployment in the field. It is not surprising, therefore, that the best suited vendors are those with robust project / program management and deep pockets, able to sustain long sales cycles thanks to their market reach and scale. Start-ups are usually ill-equipped to sell to large operators, and more of them have died during the sales cycle than have emerged successful. There is nothing like focus, and the ability to test, refine and purchase volumes of start-ups' products and services in a short timeframe, to signal to the market that an operator is serious about it.
All these tactics combined have seen the emergence of a class of RAN suppliers that are smaller, more agile and more efficient than their traditional counterparts. They certainly have a higher risk profile, as they are not as financially sustainable and haven't reached the operational excellence operators demand in their market, but the cost ratio with their counterparts is sufficiently compelling that some operators feel these vendors are good enough for specific use cases.
The introduction of these new vendors into the value chain, with their lower price points and more open interfaces, forces the traditional vendors to adapt their offering, by compressing their margins and / or developing equivalent product lines.

Tuesday, July 14, 2020

Product management playbook 2




As mentioned in a previous post, product management remains my core skill set. Over the years, I have used many resources, such as Pragmatic Marketing, and have adapted them to fit my purpose.

I continue here with some findings and perspectives on the product management function in a tech company.


The product management key tasks and deliverables

By nature, product management is articulated around three main domains:


  • Product strategy:
    • evaluation of the market, its dynamics, the competition, the clients, the standards, the regulation, the prices and costs. Negotiations with sales and management
    • Inventory of market requirements, release frequency and timing, price book, market share, P&L…
  • Product technology:
    • evolution of the technology, review of the architectural and design principles of the product and releases, negotiation with engineering and presales,
    • technical product requirements, release content, technical roadmap, engineering debt
  • Product marketing
    • Production of marketing collaterals, presentations at trade shows and channel management
    • Product presentations, brochures, web sites, demos,
Many product managers are proficient in one or two of these dimensions, rarely in all three. 




Market Analysis

Market analysis is a function that can be separate from product management and sit for instance in research or in marketing in large companies. It provides the situational awareness necessary to understand:
  • What are the problems in the market the product can solve?
  • Who are the competitors and what are their trajectory and SWOT?
  • What is the size of the market today and in the future? What market share can be attained?
The technological dimension of market analysis is a natural complement to the strategic part: evaluating standards and competitive capabilities, the gap with the current offering, as well as the evolution of the technology state of the art and the timing to include it in the roadmap.

Quantitative analysis

Depending on the maturity and seniority of the product management team, quantitative analysis can be an integral part of the product management function. It relies on reviewing the product's performance against the market and customer demand, and on sizing each opportunity in terms of cost, cost-benefit and price, to facilitate decision making as it pertains to committing to development work that is beyond or outside the roadmap.
The conditions for rigorous quantitative analysis are a strong financial background as well as management guidance in terms of gross margin objectives and conditions for deviation.
A controversial topic in quantitative analysis is product pricing and pricing strategy. Traditionally reserved for sales and management functions in start-up environments, as a company scales and adopts strong financial guidance in terms of costs and margins, this responsibility tends to transition to where the trade-offs between cost and opportunity in the roadmap are decided: product management.
Sales functions have a strong input, and in many cases a veto or override capacity, but it is important that product management can record and show what the price of a feature or a change request would be in normal conditions, with a normal customer, before extenuating circumstances such as end-of-quarter pressure, a strategic customer or competitive pressure are applied.
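The arithmetic behind that "normal conditions" price is simple and worth making explicit. Here is a minimal sketch, assuming the common convention that gross margin is expressed as a fraction of price; the function names and the 60% margin floor are illustrative, not a standard.

```python
# Hypothetical illustration of the pricing logic described above:
# given an estimated development cost and a management-set gross
# margin floor, derive the list price of a feature or change request
# in normal conditions, before any sales discount is applied.

def list_price(dev_cost, gross_margin_floor):
    """Price such that (price - cost) / price >= gross_margin_floor."""
    if not 0 <= gross_margin_floor < 1:
        raise ValueError("margin must be a fraction below 1")
    return dev_cost / (1 - gross_margin_floor)

def achieved_margin(price, cost):
    """Gross margin actually realized at a given price."""
    return (price - cost) / price

# A change request estimated at $120k of engineering effort,
# with a 60% gross margin objective:
price = list_price(120_000, 0.60)   # 300000.0
# A 15% end-of-quarter discount then shows up as an explicit,
# recorded deviation from guidance rather than an invisible one:
discounted = price * 0.85
print(round(achieved_margin(discounted, 120_000), 3))  # 0.529
```

Recording both numbers gives product management exactly what the paragraph above argues for: a trace of what the price would have been before extenuating circumstances were applied.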

Product strategy

Product strategy is charting the position of the product offering in the market. It requires a good understanding of the current position, a strong vision of the direction to take, as well as the creation of unfair competitive advantages. This part of product management is usually more qualitative, and the effort to attach metrics and quantitative KPIs for decision making usually revolves around the creation of business cases. These documents, financial in nature, present the product or release market potential in terms of revenue, margin contribution, costs and effort, for approval by management prior to the commitment to develop.

Product planning

Product planning is the core of the product management function and relates to charting the product's evolution in time. It requires a strong inventory system and a series of tools for recording and tracking user stories, requirements, features, releases… into a coherent, resource- and time-constrained framework. The roadmap, the release milestones, their feature content, the dates of availability and the non-functional requirements are some of the artefacts produced within this discipline.

Program strategy

The program strategy is a market-facing phase that consists of crafting the product or release market positioning and the associated high-level collaterals. It also touches upon the go-to-market strategy and helps with channels, VARs and intermediaries, in terms of selection, alliance management and the creation of specific collaterals.

Sales readiness

Sales readiness is an important part of the product management function. In many cases, the product manager is an important part of the sales cycle, when it is necessary to support customer meetings, RFP defenses, partner meetings, trade shows, etc. As the number of sales people and prospects increases, it is easy for a product manager to find herself pulled into constant travel. While getting customer feedback and ideas is an important part of product management, many customer meetings are more sales than ideation. This system does not scale well unless a strong pre-sales / sales engineering function is put in place. It then becomes crucial, and the responsibility of product management, to ensure that this team is trained and equipped with the collaterals to support the sales function and process.

Sales Support

Good product managers are sales' best friends. They know the product in depth and can convince the most skeptical customers. Unfortunately, since there are traditionally one or a couple of product managers per product and several hundred or thousand potential clients, it is difficult for product management to assist in all sales meetings. The sales activities that require product management travel should be prioritized to ensure that the best support is provided for the most important meetings. Product management must produce the training and collaterals necessary for pre-sales and sales engineering to perform sales support efficiently.