
Thursday, June 20, 2024

Telco grade or cloud grade? II

I have oftentimes criticized network operators’ naivety when it comes to their capacity to convince members of the ecosystem to adopt their telco idiosyncrasies.

Tuesday, November 7, 2023

What's behind the operators' push for network APIs?

As I saw the latest announcements from the GSMA, Telefonica and Deutsche Telekom, as well as Ericsson's impairment of the Vonage acquisition, I was reminded of the call I made three years ago for the creation of operator platforms.

On one hand, 21 large operators (namely America Movil, AT&T, Axiata, Bharti Airtel, China Mobile, Deutsche Telekom, e& Group, KDDI, KT, Liberty Global, MTN, Orange, Singtel, Swisscom, STC, Telefónica, Telenor, Telstra, Telecom Italia (TIM), Verizon and Vodafone) launched an initiative within the GSMA to open their networks to developers, with 8 "universal" APIs (SIM Swap, Quality on Demand, Device Status, Number Verification, Simple Edge Discovery, One Time Password SMS, Carrier Billing – Check Out and Device Location).

Additionally, Deutsche Telekom was the first to pull the trigger, launching its own gateway "MagentaBusiness API" based on Ericsson's depreciated asset. The 3 APIs launched are Quality on Demand, Device Status – Roaming and Device Location, with more to come.

Telefonica, for their part, launched their own Open Gateway offering shortly after DT, with 9 APIs (Carrier Billing, Know Your Customer, Number Verification, SIM Swap, QoD, Device Status, Device Location, QoD WiFi and Blockchain Public Address).
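
To make this concrete, here is a minimal sketch of what calling one of these gateway APIs might look like from a developer's perspective. It follows the general shape of the CAMARA Quality-on-Demand specification that underpins Open Gateway, but the endpoint, token, QoS profile name and field values are illustrative assumptions, not any specific operator's API.

    import requests  # third-party HTTP library

    # Illustrative only: endpoint, token and payload follow the general shape
    # of a CAMARA-style Quality-on-Demand API, not a specific operator's.
    GATEWAY = "https://api.example-operator.com/qod/v0"
    TOKEN = "<oauth2-access-token>"  # obtained out of band from the operator

    resp = requests.post(
        f"{GATEWAY}/sessions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "device": {"phoneNumber": "+14165550123"},        # end user's line
            "applicationServer": {"ipv4Address": "203.0.113.10"},
            "qosProfile": "QOS_E",   # named low-latency profile (illustrative)
            "duration": 3600,        # seconds of boosted connectivity
        },
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json())  # returns a session the app can later extend or delete

The point is less the syntax than the model: the developer reserves a network behaviour for a device and a time window, while the operator keeps control of how it is honoured.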

On the other hand, Ericsson wrote off 50% of the Vonage acquisition, while "creating a new market for exposing 5G capabilities through network APIs".

Dissonance much? Why are operators launching network APIs with fanfare while one of the earliest and largest vendors in the field reports asset depreciation, even as it claims a large market opportunity?

The move for telcos to expose network APIs is not new and has seen a few aborted attempts (GSMA OneAPI in 2013, DT's MobiledgeX launch in 2019). The premises have varied over time, but the central tenet remains the same. Although operators have great experience in rolling out and operating networks, they have essentially been providing the same connectivity services to all consumers, enterprises and governmental organizations without much variation. The growth in cloud networks is underpinned by new generations of digital services, ranging from social media and video streaming for consumers to cloud storage, computing, CPaaS and the migration of IT functions to the cloud for enterprises. Telcos have been mostly observers in this transition, with some timid tries to participate, but by and large they have been quite unsuccessful in creating and rolling out innovative digital services. As edge computing and the Open RAN RIC become possibly the first applications forcing telcos to consider hyperscaler tie-ins, several strategic questions arise.

Telcos have been using cloud fabric and porting their vertical, proprietary systems to cloud native environments for their own benefit. As this transition progresses, there is a realization that the growth of private networks reflects enterprises' desire to create and manage their connectivity products themselves. While operators have been architecting and planning their networks for network slicing, hoping to sell managed connectivity services to enterprises, the latter have effectively been managing their connectivity, in the cloud and in private networks, without the telcos' assistance. This realization leads to an important decision: if enterprises want to manage their connectivity themselves and extend that control to 5G / cellular, should telcos let them, and if so, by what means?

The answer is network APIs. Without giving third parties access to the network itself, the best solution is to offer a set of controlled, limited tools that allow them to discover, reserve and consume network resources while the operator retains overall control of the network. There are a few conditions for this to work.

The first is essentially the necessity of universal access. Enterprises and developers have gone through the learning curve of AWS, Google Cloud and Azure tools, APIs and semantics. They can conceivably see value in learning a new set of telco APIs, but they won't likely go through the effort if each telco has a different set in each country.

The second, and historically the hardest for telcos, is to create and manage an ecosystem and developer community. They have tried many times and in different settings, but in most cases have failed, enlisting only friendly developers, in the form of suppliers and would-be suppliers dedicating efforts to further their commercial opportunities. The jury is still out as to whether this latest foray will succeed in attracting independent developers.

The third, and possibly the riskiest part of this equation, is which APIs will prove useful: the premise that enterprises and developers will actually want to use them remains untested. Operators are betting that they can essentially create a telco cloud experience for developers more than 15 years after AWS launched, with fewer tools, less capacity to innovate, fewer cloud native skills and a pretty bad record in nurturing developers and enterprises.

Ericsson's impairment of Vonage probably acknowledges that the central premise that telco APIs are desirable is unproven; that if the model succeeds, operators will want to retain control; and that there is less value in the platform than in the APIs themselves (the GSMA launching on an open source platform essentially depreciates the Vonage acquisition directly).

Another path exists, which provides less control (and commercial upside) for telcos: hosting third-party cloud functions in their networks, even allowing third-party cloud infrastructure (such as Amazon Outposts, for instance) to be colocated in their data centers. This option comes with the benefit of an existing ecosystem, toolset, services and clients, simply extending the cloud to the telco network. The major drawback is that the telco accepts its role as a utility provider of connectivity, with little participation in service value creation.

Both scenarios are playing out right now, and both paths carry much uncertainty and risk for operators that do not want to recognize the strategic implications of their capabilities.


Friday, September 18, 2020

Rakuten: the Cloud Native Telco Network

Traditionally, telco network operators have only collaborated in very specific environments; namely standardization and regulatory bodies such as 3GPP, ITU, GSMA...

There are a few examples of partnerships, such as Bridge Alliance or BuyIn, mostly for procurement purposes. When it comes to technology, integration, and product and service development, examples of one carrier buying another's technology and deploying it in its own network have been rare.

It is not so surprising if we look at how, in many cases, operators have used their venture capital arms to invest in startups that rarely end up being used in their own networks. One has to think that using another operator's technology poses even more challenges.

Open source and network disaggregation, with associations like Facebook's Telecom Infra Project, the Open Networking Foundation (ONF), the Linux Foundation and the O-RAN Alliance, have somewhat changed the nature of the discussions between operators.

It is well understood that the current oligopolistic situation among telco network suppliers is not sustainable in terms of long-term innovation and cost structure. The wound is somewhat self-inflicted: operators forced vendors to merge and acquire one another in order to sustain the scale and financial burden of surviving 2+ year procurement processes with drastic SLAs and penalties.

Recently, these trends have started to coalesce, with renewed interest from operators in opening up the delivery chain to new technology vendors (see Open RAN) and a willingness to collaborate and jointly explore technology development and productization paths (see some of my efforts at Telefonica with Deutsche Telekom and AT&T on network disaggregation).

At the same time, hyperscalers, unencumbered by regulatory and standardization purview, have been able to achieve global scale and dominance in cloud technology and infrastructure. The recent announcements by AWS, Microsoft and Google show both interest in and pressure towards helping network operators achieve cloud nativeness by adopting the hyperscalers' models, infrastructure and fabric.

Some operators might feel this is a welcome development for specific use cases and competitive environments (see Telefonica O2 Germany announcing the deployment of Ericsson's packet core on AWS).

Many, at the same time, are starting to feel the pressure to realize their cloud native ambition, but without the hyperscalers' help or intervention. I have written many times about how telco cloud networks and their components (OpenStack, MANO, ...) have, in my mind, failed to reach that objective.

One possible guiding light in this industry over the last couple of years has been Rakuten's effort to create, from the ground up, a cloud native telco infrastructure that is able to scale and behave as a cloud, while providing the proverbial telco grade capacity and availability of a traditional network. Many doubted that it could be done - after all, the premise behind building telco clouds in the first place was that public cloud could never be telco grade.

It is now time to accept that it is possible and beneficial to develop telco functions in a cloud native environment.

Rakuten's network demonstrates that it is possible to blend traditional and innovative vendors from the telco and cloud environments to produce a cloud native telco network. The skeptics will say that Rakuten has the luxury of a greenfield network, and that much of its choices would be much harder in a brownfield environment.

The reality is that whether in the radio, the access or the core, in OSS or BSS, there are now vendors offering cloud native solutions that can be deployed at scale with telco-grade performance. The reality as well is that not all functions and not all elements are cloud native ready.

Rakuten has taken the pragmatic approach to select from what is available and mature today, identifying gaps with their ideal end state and taking decisive actions to bridge the gaps in future phases.

Between the investment in Altiostar, the acquisition of Innoeye and the joint development of a cloud native 5G Standalone Core with NEC, Rakuten has demonstrated clarity of vision, execution and commitment to be not only the first cloud native telco, but also the premier cloud native telco supplier with its Rakuten Mobile Platform. The latest announcement of an MoU with Telefonica could be a strong market signal that carriers are ready to collaborate with other carriers in a whole new way.


Friday, May 8, 2020

What are today's options to deploy a telco cloud?

Over the last 7 years, we have seen leading telcos embrace cloud technology as a means to create an elastic, automated, resilient and cost-effective network fabric. There have been many different paths and options from a technological, cultural and commercial perspective.

Typically, there are 4 categories of solutions telcos have been exploring:

  • Open source-based implementation, augmented by internal work
  • Open source-based implementation, augmented by traditional vendor
  • IT / traditional vendor semi-proprietary solution
  • Cloud provider solution


The jury is still out as to which option will prevail, as they all have seen growing pains and setbacks.

Here is a quick cheat sheet of some possibilities, based on your priorities:

[Table: high-level comparison of telco cloud options, vendor and open source, with pros and cons]
Obviously, this table changes quite often based on the progress and announcements of the various players, but it can come in handy if you want to evaluate, at a high level, some of the options and the pros / cons of deploying one vendor or open source project versus another.

Details and comments are part of my workshops and report on telco edge and hybrid cloud networks.

Thursday, April 23, 2020

Hyperscalers enter telco battlefront

We have, over the last few weeks, seen a flurry of announcements from hyperscalers investing in telco infrastructure and networks: Facebook's $5.7B investment in India's Reliance Jio, Microsoft's acquisition of Affirmed Networks for $1.35B, AWS' launch of Outposts and Google's Anthos ramp-up.


Why are hyperscalers investing in telecom gear and why now?

Facebook had signalled its intent as far back as 2016, when Mark Zuckerberg presented his vision for the future of the company at Mobile World Congress.
[Slide: Facebook's vision roadmap from Mark Zuckerberg's Mobile World Congress 2016 keynote]
Beyond the obvious transition from picture and video sharing to virtual / augmented reality, tucked in at the top right are two innocuous words: "telco infra".
What Facebook realized is that basically anyone who has regular access to broadband will likely use a Facebook service. One way to increase the company’s growth is to invent / buy / promote more services, which is costly and uncertain. Another way is simply to connect more people.
With over 2.5 billion users of Facebook products, the company still has some room to grow in this area, but the key limiting factor seems to be connectivity itself. The last billions of broadband unconnected are harder to reach because traditional telecom networks do not extend there. The last unconnected are mostly in rural areas: geographically dispersed, with lower incomes than their urban counterparts.
Looking at this problem from its perspective, Facebook reached a conclusion similar to that of the network operators in these markets. Traditional telco networks are too expensive to deploy and maintain to reach this population sustainably. The same tactics employed by operators to disaggregate and stimulate the infrastructure market can be refocused and better stimulated by Facebook.
This was the start of Facebook Connectivity, a specific line of business in the social media giant's empire to change the cost structure of telco networks. Facebook Connectivity has evolved to encompass a variety of efforts, ranging from the creation of TIP (an open forum to disaggregate and open telco networks) to the co-investment with Telefonica in a joint venture dedicated to connecting the unconnected in Latin America and, this week, the announced acquisition of 9.9% of Reliance Jio in India.


How about Microsoft, Google and others?

Google had, even before the recent open source cloud platform Anthos, dipped its toes in telco waters with Project Fi and its fiber business.
Microsoft has been trying for the last 5 years to exploit the transition in telco networks from proprietary technology to IT. Even IBM's Red Hat acquisition had a telco angle, as these giants also try to become more prevalent vendors in the telco ecosystem.

So... why now?

Another powerful pivot point in telecom is the emergence of 5G. As the latest generation of telephony technology rolls out, telco networks are undeniably being re-architected and redesigned to look more like cloud networks. This creates an interesting set of risks and opportunities for incumbents and new entrants alike.
For operators, the main interest is to drastically reduce the cost of rolling out and maintaining complex telco networks by using the powerful virtualization, SDN and automation techniques that have allowed hyperscalers to dominate cloud computing. These technologies, if applied correctly, can transform the cost structure of network operators, which is particularly important at the outset of multi-billion dollar investments in 5G infrastructure. The radical cost structure disruption comes from the disaggregation of the network between hardware and software, the introduction of new vendors in the value chain who put price pressure on incumbents, and widespread automation and cloud economics.
These opportunities also bring new risks. While they open up the supply chain with the introduction of new vendors, they also allow new actors to enter the value chain, either to substitute and dominate legacy vendors or to create new control points (see the orchestrator wars I have mentioned in previous posts). The additional risk is that the cost of entry into telco becomes lower for cloud hyperscalers as the technology to run telco networks transitions from a proprietary, closed ecosystem to an open source, cloud environment.

The last pivot point is another telco technology very specifically aimed at creating a cloud environment in telco networks: edge computing. It creates a cloud layer that can allow the provisioning, reservation and consumption of telco connectivity together with cloud computing. As a greenfield environment, it is a natural entry point for cloud operators and new vendors alike to enter the telco ecosystem.

Facebook, Google, AWS, Microsoft and others seem to think that 5G and edge computing in particular will be more cloud than telco. Network operators try to resist this claim by building a 5G network that will be a fully integrated connectivity and computing experience, complementary to public clouds, but different enough to command a premium, a different value chain and operator control.

In which direction will the market move? This and more in my report and workshop Edge computing and Hybrid Clouds 2020.

Wednesday, April 15, 2020

The business cases of edge computing

Edge computing has been a trendy topic over the last year. Between AWS' launch of Outposts, Microsoft's continuous efforts with Azure Stack, Nvidia's EGX platform and its specialized gaming variant, and Google's Anthos toolkit, much has been said about this market segment.
Network operators, on their side, have announced deployment plans in many geographies, but with little in terms of specific new services, revenues or expected savings.
Having been in the middle of several of these discussions, between vendors, hyperscalers, operators and systems integrators, I am glad to share a few thoughts on the subject.

Hyperscalers have not been looking at edge computing as a new business line, but rather as an extension of their current cloud capabilities. Many use cases today cannot be fully satisfied by the cloud, due to a combination of high / variable latency, network congestion, and lack of visibility / control over last-mile connectivity.
For instance, anyone who has tried to edit a diagram online in PowerPoint Office 365, or to play a massively multiplayer online cloud game, will recognize how maddeningly frustrating the experience can be.
Edge computing, as in bringing cloud resources physically closer to where data is consumed / produced, makes sense to reduce latency and the need for dedicated on-premise resources. From a hyperscaler's perspective, edge computing can be as simple as dropping a few racks within an operator's data center to allow their clients to use and configure new availability zones with specific performance and pricing.
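
To illustrate how transparently that model surfaces to developers, here is a minimal sketch using AWS's boto3 SDK to list Wavelength Zones, the edge racks AWS hosts inside operator networks; the region and configured credentials are assumptions:

    import boto3  # AWS SDK; assumes credentials are configured

    # Edge capacity appears to the developer as just another zone type,
    # alongside regular availability zones and local zones.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    zones = ec2.describe_availability_zones(
        AllAvailabilityZones=True,
        Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
    )
    for zone in zones["AvailabilityZones"]:
        print(zone["ZoneName"], zone["OptInStatus"])

Deploying to one of these zones then works like deploying to any other zone, which is precisely what makes the hyperscaler's edge proposition so frictionless.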

Network operators, who have largely lost the wholesale cloud computing market to the hyperscalers, see edge computing as an opportunity to reintegrate the value chain by offering cloud-like services at incomparable performance. Ideally, they would like to capture and retain the emerging high-performance cloud computing market that is sure to spawn a new category of digital services, ranging from AI-augmented manufacturing and automation to autonomous vehicles, ubiquitous facial and object recognition and compute-less smart devices. The problem is that many of these hypothetical services are ill-defined, far-fetched and futuristic, which does not inspire sufficient confidence in the CFO who has to approve multi-billion dollar capital expenditure to get going.
But surely, if the likes of Microsoft, Intel, HP, Google, Facebook and AWS are investing in edge computing, there must be something there? What are operators missing to make the edge computing business case positive?

Mobile or multi-access edge computing?

Many operators looked at edge computing first from a mobile perspective. The mobile edge computing business case remains extremely uncertain. No identified use case justifies the short-term cost of deploying thousands of mini compute capabilities at mobile sites. Even with the prospect of upgrading networks to 5G, the added cost of mobile edge computing is hard to justify.

If not at mobile sites, network operators' best bet for deploying edge computing is in central offices (COs). These facilities house switching platforms for copper, fiber and DSL connectivity, and are overdue for upgrades in many markets. The deployment of fibre, the replacement of copper and the evolution of technology from GPON to XGS-PON and NG-PON2 are excellent windows of opportunity to replace aging single-purpose infrastructure with open, software-defined computing capability.
The level of investment to retool central offices into mini data centers is orders of magnitude lower than in the mobile case, and it is completely flexible. It is not necessary to convert all central offices; one can proceed by deploying one per state / province / region and increase capillarity as business dictates.

What use cases would make edge computing's business case positive for operators in that scenario?


  • First, for operators who offer triple and quadruple play, the opportunity to replace aging dedicated infrastructure for TV, fixed telephony, and enterprise and residential connectivity with a cloud native, software-defined open architecture provides interesting savings and benefits. The savings are realized from the separation of hardware and software, the sourcing and deployment of white boxes, and the opex savings of separating the control plane and centralizing and automating service elasticity.
  • Additional savings are to be had with the deployment of content / video caches at the edge. Particularly for TV providers who see on-demand and unicast live traffic increasing, positioning edge caches allows up to 80% savings in content transport (see the back-of-envelope sketch after this list). This is likely to increase with the upgrade from HD to 4K and 8K and the growth of AR/VR.
  • Lastly, for operators who deploy their own CPE in customers' homes, edge computing drastically simplifies and reduces the cost of this equipment and of its deployment / maintenance, by moving services into the central office and reducing the need for storage and compute in the CPE.
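
A back-of-envelope sketch of the caching savings mentioned in the list above; the traffic volume and hit ratio are illustrative assumptions, not operator data:

    # Transport saved by serving video from a central office edge cache.
    monthly_video_tb = 1000   # TB/month of unicast video leaving the core (assumed)
    cache_hit_ratio = 0.80    # share of requests served locally (assumed)

    transported_tb = monthly_video_tb * (1 - cache_hit_ratio)
    print(f"Core/backhaul transport: {transported_tb:.0f} TB, down from {monthly_video_tb} TB")
    # -> 200 TB: the 'up to 80% savings' cited above. The absolute saving grows
    # with the HD to 4K/8K transition, since every cache hit is a bigger object.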

While the savings can be significant in the long run, no operator can justify replacing existing infrastructure before it is fully amortized on these premises alone. This is why some operators are looking at these scenarios only for greenfield fiber deployments or as part of massive copper replacement windows.
Savings alone, in all likelihood, won't allow operators to deploy at the rhythm necessary to counter hyperscalers. New revenue streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest-hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited due to the poor online experience. The most successful game franchises are massively multiplayer online titles. They pitch dozens of players against each other and require tightly controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, if they deploy gaming servers at the edge. Even without a full-blown gaming service, providing game caching at the edge can drastically reduce download times for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today and are proven to provide a superior experience compared to the cloud. These services range from high-performance cloud storage to remote desktop, video surveillance and recognition.
  • Beyond operator-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
The main issue at this stage for operators is to decide whether to let hyperscalers deploy their infrastructure in their networks, capturing most of the value of these emerging services but also opening up a new line of wholesale hosting revenue, or to try to go it alone, as an operator or a federation of operators, deploying a telco cloud infrastructure and building the necessary platform to resell edge compute resources in their networks.

This and a lot more use cases and business cases in my online workshop and report Edge Computing 2020.

Tuesday, January 28, 2020

Announcing telco edge computing and hybrid cloud report 2020


As I am ramping up towards the release of my latest report on telco edge computing and hybrid cloud, I will be releasing some previews. Please contact me privately for availability date, price and conditions.

In the 5 years since I published my first report on the edge computing market, it has evolved from an obscure niche to a trendy buzzword. What originally started as a mobile-only technology has evolved into a complex field, with applications in IT, telco, industry and cloud. While I have been working on the subject for 6 years, first as an analyst, then as a developer and network operator at Telefonica, I have noticed that the industry's perception of the space has polarized drastically with each passing year.

The idea that telecom operators could deploy and use a decentralized computing fabric throughout their radio access has been largely swept aside and replaced by the inexorable advances in cloud computing, showing a capacity to abstract decentralized computing capacity into a coherent, easy to program and consume data center as a service model.

As often, there are widely diverging views on the likely evolution of this model:

The telco centric view

Edge computing is a natural evolution of telco networks. 
5G necessitates robust fibre-based backhaul transport. With the deployment of fibre, it is imperative that the old copper switching centers (the central offices) convert into multi-purpose mini data centers. These are easier and less expensive to maintain than their traditional counterparts and offer interesting opportunities to monetize unused capacity.

5G will see a new generation of technology providers that will deploy cloud native software-defined functions that will help deploy and manage computing capabilities all the way to the fixed and radio access network.

Low-risk internal use cases such as CDN, caching, local breakout, private networks, parental control, and DDoS detection and isolation are enough to justify investment and deployment. The infrastructure, once deployed, opens the door to more sophisticated use cases and business models, such as low-latency compute as a service, or wholesale high-performance localized compute, that will extend the traditional cloud models and services to a new era of telco digital revenues.

Operators have long run decentralized networks, unlike cloud providers, who favour federated centralized networks, and that experience will be invaluable in administering and orchestrating thousands of mini data centers.

Operators will be able to reintegrate the cloud value chain through edge computing. Their right to play is underpinned by their control of, and capacity to program, the last-mile connectivity, and by the fact that traditional public clouds will not out-invest them in the number and capillarity of data centers in their geographies (outside of the US).

With its long-standing track record of creating interoperable decentralized networks, the telco community will create a set of unifying standards that will make it possible to implement an abstraction layer across all telcos, to sell edge computing services irrespective of network or geography.

Telco networks are managed networks; unlike the internet, they can offer a programmable and guaranteed quality of service. Together with 5G evolutions such as network slicing, operators will be able to offer tailored computing services with guaranteed speed, volume and latency. These network services will be key to the next generation of digital and connectivity services that will enable autonomous vehicles, collaborating robots, augmented reality and pervasive AI-assisted systems.

The cloud centric view:

Edge computing, as it turns out, is less about connectivity than about cloud, unless you are able to weave in programmable connectivity.
Many operators have struggled with the creation and deployment of a telco cloud, whether for their own internal purposes or to resell cloud services to their customers. I don't know of any operator who has one that is fully functional, serves a large proportion of their traffic or customers, and is anywhere near as elastic, economic, scalable and easy to use as a public cloud.
So, while the telco industry has been busy trying to develop a telco edge compute infrastructure, virtualization layer and platform, the cloud providers have just started developing decentralized mini data centers for deployment in telco networks.

In 2020, the battle to decide whether edge computing is more about telco or about cloud is likely already finished, even if many operators and vendors are just arming themselves now.

Edge computing, to be a viable infrastructure-based service that operators can resell to their customers, needs a platform that allows third parties to discover, view, reserve and consume it on a global scale, not operator by operator or country by country, and the telco community looks ill-equipped for a fast deployment of that nature.


Whether you favour one side or the other of that argument, the public announcements in that space from AT&T, Amazon Web Services, Deutsche Telekom, Google, Microsoft, Telefonica, Vapor IO and Verizon – to name a few – will likely convince you that edge computing is about to become a reality.

This report analyses the different definitions and flavours of edge computing, the predominant use cases and the position and trajectory of the main telco operators, equipment manufacturers and cloud providers.

Tuesday, March 15, 2016

Mobile QoE White Paper

Extracted from the white paper "Mobile Networks QoE" commissioned by Accedian Networks. 

2016 is an interesting year in mobile networks. Maybe for the first time, we are seeing tangible signs of evolution from digital services to mobile-first. As was the case for the transition from traditional services to digital, this evolution causes disruptions and new behavior patterns across the ecosystem, from users to networks to service providers.
Take social networks, for example: 47% of Facebook users access the service exclusively through mobile and generate 78% of the company's ad revenue. In video streaming services, YouTube sees 50% of its views on mobile devices, and 49% of Netflix's 18-to-34-year-old demographic watches it on mobile.
This extraordinary change in behavior causes unabated traffic growth on mobile networks, as well as changes in the traffic mix. Video becomes the dominant use, pervading every other aspect of the network. Indeed, everyone involved in the mobile value chain has identified video services as the most promising revenue opportunity for next-generation networks. Video services are rapidly becoming the new gold rush.


“Video services are the new gold rush”
Video is essentially a very different animal from voice or even other data services. While voice, messaging and data traffic can be predicted fairly accurately as a function of the number and density of subscribers, time of day and busy hour patterns, video follows a less predictable growth. There is a wide disparity in consumption from one user to another, and this is not only due to viewing habits. It is also a function of device screen size and resolution, the network being used and the video services accessed. The same video, viewed on a social sharing site on a small screen, in full HD or at 4K on a large screen, can have a 10-20x impact on the network, for essentially the same service (see the sketch below).
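
To put rough numbers on that 10-20x claim, here is a quick sketch with typical streaming bitrates; the figures are illustrative orders of magnitude, not measurements:

    # Same title, very different network footprint depending on screen/resolution.
    bitrates_mbps = {
        "small screen, social sharing site": 1.0,
        "full HD large screen": 8.0,
        "4K large screen": 20.0,
    }
    base = bitrates_mbps["small screen, social sharing site"]
    for profile, mbps in bitrates_mbps.items():
        print(f"{profile}: {mbps:>5.1f} Mbps ({mbps / base:.0f}x)")
    # -> 1x, 8x, 20x: the same service can weigh more than an order of
    # magnitude more on the network.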


Video requires specialized equipment to manage and guarantee its quality in the network; otherwise, when congestion occurs, there is a risk that it consumes resources, effectively denying voice, browsing, email and other services fair (and necessary) access to the network.
This unpredictable traffic growth results in exponential costs for networks to serve the demand.
As mobile becomes the preferred medium for consuming digital content and services, Mobile Network Operators (MNOs), whose revenue was traditionally derived from selling “transport,” see their share squeezed as subscribers increasingly value content and have more and more options for accessing it. The double effect of the MNOs’ decreasing margins and increasing costs forces them to rethink their network architecture.
New services on the horizon, such as Voice and Video over LTE (VoLTE & ViLTE), augmented and virtual reality, wearables and IoT, automotive and M2M, will not be achievable technologically or economically with current networks.

Any architecture shift must not simply increase capacity; it must also improve the user experience. It must give the MNO granular control over how services are created, delivered, monitored, and optimized. It must make best use of capacity in each situation, to put the network at the service of the subscriber. It must make QoE — the single biggest differentiator within their control — the foundation for network control, revenue growth and subscriber loyalty.
By offering an exceptional user experience, MNOs can become the access provider of choice, part of their users’ continuously connected lives, as their trusted curator of apps, real-time communications, and video.


“How to build massively scalable networks while guaranteeing Quality of Experience?”

As a result, the mobile industry has embarked on a journey to design tomorrow’s networks, borrowing heavily from the changes that have revolutionized enterprise IT departments with SDN (Software Defined Networking) and innovating with 5G and NFV (Network Functions Virtualization), for instance. The target is to emulate some of the essential attributes of innovative service providers such as Facebook, Google and Netflix, who have had to innovate and solve some of the very same problems.


QoE is rapidly becoming the major battlefield upon which network operators and content providers will differentiate and win consumers’ trust. Quality of Experience requires a richly instrumented network, with feedback telemetry woven through its fabric to anticipate, detect and measure any potential failure.

Wednesday, June 10, 2015

Google's MVNO - Project Fi is disappointing

A first look at Google's MVNO, launching in the US on the Sprint and T-Mobile networks, reveals it to be a little disappointing (or a relief, if you are a network operator). I had chronicled the announcement of the launch from Mobile World Congress and expected much more disruption in services and pricing than what is announced here.

The MVNO, dubbed Project Fi, is supposed to launch shortly, and you have to request an invitation to get access to it (so mysterious and exciting...).

At first glance, there is little innovation in the service. The Google virtual network will span two LTE networks from different providers (but so does Virgin's in France, for instance) and will also connect "seamlessly" to the "best" wifi hotspot. It will be interesting to read the first feedback on how effectively the device selects the best signal from these three options and how dynamically that selection occurs. Handover mid-call or mid-data-session is going to be an interesting use case; Google assures you that the transition will be "seamless".

On the plus side, Google has really taken a page from Iliad's disruptive Free service launched in France (Iliad was at one time rumored to be acquiring T-Mobile US). See here the impact their pricing strategy has had on the French telecommunications market.
  1. Fi Basic service comes with unlimited US talk and text, unlimited international text and wifi tethering for $20 per month.
  2. The subscriber sets a monthly data budget: he selects a monthly amount and prepays $10 per GB. At the end of the month, unused data is credited back at 1c / MB towards the following month. The user can change their budget on a monthly basis. Only cellular data counts towards usage, not wifi. That's simple, easy to understand and, after a little experimentation, will feel very natural (see the worked example after this list).
  3. No contract, no commitment (except that you have to buy a $600+ Nexus phone).
  4. You can send and receive all cellular texts and calls using Google Hangouts on any device.
  5. Data roaming is the same price as domestic but... see drawbacks.
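
A quick worked example of the pricing described in point 2, with an assumed usage pattern:

    # Project Fi bill for an illustrative month (rates from the plan above).
    base_fee = 20.00                      # unlimited talk/text + wifi tethering
    budget_gb = 3                         # subscriber-chosen data budget
    prepaid = budget_gb * 10.00           # $10 prepaid per GB -> $30.00
    used_gb = 1.6                         # assumed cellular data actually used
    credit = (budget_gb - used_gb) * 1000 * 0.01   # 1c per unused MB -> $14.00

    print(f"This month's bill: ${base_fee + prepaid:.2f}")    # $50.00
    print(f"Credited toward next month: ${credit:.2f}")       # $14.00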

Here are, in my mind, the biggest drawbacks of the service as it is described.
  1. The first big disappointment is that the service will initially run only on Google's Nexus 6. I have spoken at length about the dangers and opportunities of a fully vertical delivery chain in wireless networks, and Google at first seems to pile up the drawbacks (lack of device choice) with few of the supposed benefits (where is the streamlined user experience?).
  2. "Project Fi connects you automatically to free, open wifi networks that do not require any action to get connected". I don't know you, but I don't think I have ever come across one of these mysterious hotspots in the US. Even Starbucks or MC Donald free hot spots require to accept terms and conditions and the speed is usually lower than LTE. 
  3. Roaming data speed limited to 256 kbps! Really? Come on, we are in 2015. Even if you are not on LTE, you can get multi-Mbps speeds on 3G / HSPA. Capping at that speed means you will not be streaming video, tethering or using data-hungry apps (Facebook, Netflix, Periscope, Vine, Instagram...). What's the point? At this stage, better to say roaming on wifi only (!?).
In conclusion, it is an interesting "project" that will be sure to make some noise and have an impact on the already active price war between operators in the US, but on the face of it, there is too little innovation and too much hassle for it to become a mass market proposition. Operators still have time to figure out new monetization strategies for their services, but more than ever, they must choose between becoming wholesalers or value-added providers.

Wednesday, March 18, 2015

OTT as MVNO… or MNOs

This is an excerpt from my latest report "Video monetization 2015".

OTT providers, on their side, might have some slightly different plans and views from mobile network operators. Most of them have built a predominantly digital business, based on internet delivery, and have had to navigate the intricacies of creating an ecosystem (content creation, aggregation, distribution, ...) and a business model (free, freemium, ad-sponsored, hybrid, subscription, sponsored...) for the internet.

This effort has resulted in partnerships and value chains where content delivery is a small part of the value, and where third parties like CDNs are replaced by homegrown solutions when they can't provide suitable or economical service levels, as illustrated by Netflix's and Google's caching strategies.

As a result, I believe the announcement by Google's SVP of Products, Sundar Pichai, at Mobile World Congress 2015 is likely to be a sea change. The company has decided to put the rumors of becoming an MVNO to rest by integrating the value chain one step further vertically. The company will launch an MVNO service in the US, probably on the Sprint and/or T-Mobile networks, blending cellular and wi-fi coverage. It starts to look increasingly like the dystopian future described here.

It is very likely that Google, being who they are, will look at extending their services to mobile in a very different fashion than a mobile network operator. One can muse that, in all likelihood, a Google subscriber (!?) with an Android device on YouTube or G+ is unlikely to pay for minutes of voice or megabytes of data. It is likely that this first attempt to translate the very basics of mobile network economics into an ad-sponsored model will have a very disruptive and durable effect on the whole value chain.

If you remember, this is not Google's only initiative in mobile networks. Since 2013, the company has been exploring the possibility of building and operating wireless networks in Southeast Asia and sub-Saharan Africa. Add to this the recent announcement that Telstra in Australia, Vodafone in New Zealand and Telefonica in South America have all agreed to participate in live trials of the Loon project, and it is likely that Google will look to be increasingly involved in cellular networks. The project now supports LTE, and balloons can stay up for about 6 months.

Driving the nail further into operators' coffins, Mark Zuckerberg at the same show was advocating for Facebook's internet.org initiative, which promotes free mobile internet access in emerging countries. The rationale here is that free internet promotes usage, which promotes engagement, which promotes new revenues. Current experiments at Millicom in Paraguay and Tanzania saw data users increase by around 30% and smartphone sales increase 10x.


All in all, OTT providers have a fundamentally different view of services, and value different things, than mobile network operators. The reconciliation of these views and the emergence of a new coherent business model will be painful but necessary.

More on the subject, as well as strategies from OTT and mobile network operators to monetize video in "Video monetization 2015". 

Monday, October 27, 2014

HTTP 2.0, SPDY, encryption and wireless networks

I had mused, three and a half years ago at the start of this blog, that content providers might decide to encrypt and tunnel traffic in the future in order to retain control of the user experience.

It is amazing that wireless browsing is increasingly becoming the medium of choice for access to the internet, yet the technology it relies on is still designed for fixed, high-capacity, lossless, low-latency networks. One would think that one would design a technology for its primary (and most challenging) use case and adapt it for more generous conditions, instead of the other way around... but I am ranting again.

We are now definitely seeing this prediction accelerate since Google introduced SPDY and proposed it as the default for HTTP 2.0.
While the latest HTTP 2.0 draft is due to be completed this month, many players in the industry are silently but definitely committing resources to the battle.

SPDY, in its current version, does not enhance and in many cases decreases user experience in wireless networks. Its reliance on TCP leaves it too dependent on round-trip time, which in turn creates race conditions in lossy networks. SPDY can actually contribute to congestion rather than reduce it in wireless networks.
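
To see why round-trip time and loss dominate, consider the classic Mathis et al. approximation of TCP throughput: throughput ≈ (MSS / RTT) x (1 / sqrt(loss)). Since SPDY multiplexes a whole page over one TCP connection, all of its streams share this single ceiling; the numbers below are illustrative:

    from math import sqrt

    # Mathis et al. rule of thumb for steady-state TCP throughput.
    def tcp_throughput_mbps(mss_bytes: int = 1460, rtt_ms: float = 60,
                            loss: float = 0.01) -> float:
        return (mss_bytes * 8 / 1e6) / (rtt_ms / 1000) / sqrt(loss)

    print(f"{tcp_throughput_mbps(rtt_ms=30, loss=0.0001):.1f} Mbps")  # fixed line
    print(f"{tcp_throughput_mbps(rtt_ms=80, loss=0.01):.1f} Mbps")    # lossy radio
    # -> roughly 39 vs 1.5 Mbps: the same protocol collapses on a lossy,
    # high-RTT wireless path.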

On one side, content providers are using net neutrality arguments to further their case for encryption. They are conflating security (the NSA leaks...), privacy (the Apple iCloud leaks) and net neutrality (equal, and if possible free, access to networks) concerns.

On the other side, network operators and vendors are trying to argue that net neutrality does not mean non-intervention, and that the good of users overall is subverted when some content providers and browser / client vendors use aggressive and predatory tactics to monopolize bandwidth in the name of QoE.

At this point, things are still fairly fluid. Google is proposing that most / all traffic be encrypted by default, while network operators are trying to introduce the concept of trusted proxies that can decrypt / encrypt under certain conditions and with the user's assent.

Both these attempts are short-sighted and doomed to fail, in my mind, and are the result of aggressive strategies to establish market dominance.

In a perfect world, the device, network and content provider negotiate service quality based on device capabilities, the subscriber's data plan, network capacity and content quality. Technologies such as adaptive bit rate could have been tremendously efficient here, but the operative word in the previous sentence is "negotiate", which assumes collaboration, discovery and access to the relevant information to take decisions.
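
As a sketch of what cooperative behaviour could look like, here is a client-side rendition selection that deliberately leaves headroom for other users instead of grabbing the maximum the pipe might bear; the bitrate ladder and safety margin are illustrative assumptions:

    # Pick the highest ABR rung that fits measured throughput with a margin.
    LADDER_KBPS = [400, 1200, 2500, 5000, 8000]   # available renditions (assumed)

    def select_rung(measured_kbps: float, safety: float = 0.8) -> int:
        usable = measured_kbps * safety            # leave 20% headroom
        fitting = [r for r in LADDER_KBPS if r <= usable]
        return fitting[-1] if fitting else LADDER_KBPS[0]

    print(select_rung(3000))   # -> 1200 kbps, not the 2500 the raw pipe suggests

The negotiation the paragraph above calls for would replace the fixed safety margin with signals from the network and the subscriber's plan.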

In the current state of affairs, adaptive bit rate is oftentimes corrupted in order to seize as much network bandwidth as possible, which results in devices and service providers aggressively competing for bits and bytes.
Network operators tend to try to improve or control the user experience by deploying DPI, transparent caches, pacing technology, traffic shaping engines, video transcoding, etc...

Content providers assume that the highest content quality (HD video, for instance) equals maximum subscriber experience, and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

This behaviour leads to a network where all resources are perpetually in contention and all end-points vie for priority and maximum resource allocation. From this perspective, one can understand that there is no such thing as "net neutrality", at least not in wireless networks. When network resources are oversubscribed, decisions are taken as to who gets more capacity, priority, speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When in contention, resources manage traffic based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.

Now, who said access to wireless should be fair and equal? Unless the networks are nationalized and become government assets, I do not see why private companies, in a competitive market couldn't manage their resources in order to optimize their utilization.

If we transport ourselves to a world where all traffic becomes encrypted overnight, networks lose the ability to manage traffic beyond allowing / blocking it and pinning high-level QoS metrics to specific services. That would lead to network operators being forced to charge exclusively for traffic. At that point, everyone has to pay per byte transmitted. The cost to users would become prohibitive as more and more video of ever higher resolution flows through the networks. It would also mean that these video providers could asphyxiate the other services... More importantly, it would mean that the user experience would become the fruit of the fight between content providers' abilities to monopolize network capacity, which would go against any "net neutrality" principle. A couple of content providers could dominate not only services but access to these services as well.

The best rationale against this scenario is commercial. Advertising is the only common business model that supports pay TV and many web services today. The only way to have an efficient, high-CPM ad model in wireless is to make it relevant and contextual, and the only way that is going to happen is if the advertising is injected as close to the user as possible. That means collaboration. Network operators cannot provide subscriber data to third parties, so they have to exploit and anonymize it themselves. This means that encryption, where needed, must occur after ad insertion, which needs to occur at the network edge.

The most commercially efficient model for all parties involved is collaboration and advertising, but current battle plans show adversarial models, where obfuscation and manipulation are used to reduce opponents' margin of maneuver. Complete analysis and scenarios in my video monetization report here.