
Friday, August 16, 2024

Rant: Why do we need 6G anyway?


I have to confess that, even after 25 years in the business, I am still puzzled by the way we build mobile networks. If tomorrow we were to restart from scratch, with today's technology and knowledge of the market, we would certainly design and deploy them in a very different fashion.

Increasingly, mobile network operators (MNOs) have realized that the planning, deployment and management of infrastructure is a fundamentally different business from the development and commercialization of the associated connectivity services. The two follow different investment and amortization cycles and have very different economic and financial profiles. For this reason, investors value network infrastructure differently from digital services, and many MNOs have decided to start separating their fibre, antenna and radio assets from their commercial operations.

This has resulted in a flurry of splits, spin-offs and divestitures, and in the growth of specialized tower and infrastructure companies. If we follow this pattern to its logical conclusion, looking at the failed economics of 5G and the promises of 6G, one has to wonder whether we are on the right path.

Governments keep treating spectrum as a finite, exclusive resource, but as demand for private networks and unlicensed spectrum increases, there is a clear cognitive dissonance in the economic model. If 5G's success was predicated on connectivity for enterprises, industries and verticals, and if these organizations have needs that cannot be satisfied by public networks, why would MNOs spend so much money on spectrum that is unlikely to bring additional revenue? The consumer market does not need another G until new services and devices emerge that mandate different connectivity profiles. The metaverse was a fallacy; autonomous vehicles, robots and the like are in their infancy and work around the lack of adequate connectivity by keeping their compute and sensors on the device rather than at the edge.

As the industry prepares for 6G and its associated future hype, nonsensical use cases and fantastical services, one has to wonder how we can stop designing networks for use cases that never emerge as dominant, forcing redesigns and late adaptation. Our track record as an industry is not great there. If you remember, 2G was designed for voice services; texting was the unexpected killer app. 3G was designed for Push to Talk over Cellular, believe it or not (remember SIP and IMS...), and picture messaging and early browsing were the successes. 4G was designed for Voice over LTE (VoLTE), and video and social media were the key services. 5G was supposed to be designed for enterprise and industry connectivity but has failed to deliver so far (late implementation of slicing and 5G Standalone). So... what do we do now?

First, the economic model has to change. Rationally, it is not economically efficient for 4 or 5 MNOs to buy spectrum and deploy separate networks to cover the same population. We are seeing more and more network sharing agreements, but we must go further. In many countries, it makes more sense to have a single neutral infrastructure operator owning the cell sites, radios, fiber backhaul and even edge data centers / central offices, all the way up to but not including the core. This neutral host can run a wholesale economic model while the MNOs focus on selling connectivity products.

Of course, this would probably require some level of governmental and regulatory overhaul to facilitate the model. Obviously, one of the problems is that many MNOs would have to transfer assets and, more importantly, personnel to that neutral host, which would undoubtedly shed many redundant positions as 3 or 4 teams merge into one. Most economically advanced countries have unions protecting these jobs, so this transition is probably impossible unless a concerted effort to cap hiring, not replace retirements and retrain people is carried out over many years...

The other part of the equation is the connectivity and digital services themselves. Let's face it, connectivity differentiation has mostly been a pricing and bundling exercise to date. MNOs have not been overly successful with the creation and sale of digital services, since social media and video streaming services have captured most of the consumer's interest. On the enterprise side, a large part of the revenue is related to the exploitation of last mile connectivity, with the sale of secure private connections over public networks, first as MPLS, then SD-WAN, SASE and cloud interconnection, as the main services. Generative AI promises to be the new shining beacon of advanced services, but in truth there is very little there for MNOs in the short term in terms of differentiation.

There is nothing wrong with being a very good, cost-effective, performant utility connectivity provider. But most markets can probably accommodate only one or two of these. Other MNOs, if they want to survive, must create true value in the form of innovative connectivity services. This requires not only a change of mindset but also of skill set. I think MNOs need to look beyond the next technology, the next G, and evolve towards a more innovative model. I have worked on many of these efforts, from the framework to the implementation and the systematic creation of sustainable competitive advantage. It is quite different work from the standards-and-technology-evolution approach favored by MNOs, but necessary for those seeking to escape the utility model.

In conclusion, 6G and technological improvements in speed, capacity, coverage or latency are unlikely to solve MNOs' systemic economics and differentiation problems unless more effort is put into service innovation and radical infrastructure sharing.

Wednesday, April 15, 2020

The business cases of edge computing

Edge computing has been a trendy topic over the last year. Between AWS' launch of Outposts, Microsoft's continuous efforts with Azure Stack, Nvidia's specialized gaming-oriented EGX platform and even Google's Anthos toolkit, much has been said about this market segment.
Network operators, on their side, have announced deployment plans in many geographies, but with little detail in terms of specific new services, revenues or expected savings.
Having been in the middle of several of these discussions, between vendors, hyperscalers, operators and systems integrators, I am glad to share a few thoughts on the subject.

Hyperscalers have not been looking at edge computing as a new business line, but rather as an extension of their current cloud capabilities. There are many use cases today that cannot be fully satisfied by the cloud, due to a combination of high or variable latency, network congestion, and lack of visibility into and control of the last mile connectivity.
For instance, anyone who has tried to edit a diagram online in PowerPoint for Office 365, or to play a massive multiplayer online cloud game, will recognize how maddeningly frustrating the experience can be.
Edge computing, as in bringing cloud resources physically closer to where data is consumed or produced, makes sense to reduce latency and the need for on-premise dedicated resources. From a hyperscaler's perspective, edge computing can be as simple as dropping a few racks within an operator's data center to let clients use and configure new availability zones with specific performance and pricing.
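To make this concrete, here is a minimal sketch of what "just another availability zone" means from a client's perspective: probe the candidate zones for round-trip time, then deploy in the best one through the usual API. The zone names, probe endpoints and AMI are hypothetical placeholders; the point is that the operator-hosted racks are consumed like any other zone.

```python
# Minimal sketch (hypothetical zone names and endpoints): an operator-hosted
# edge zone looks like a regular availability zone with a better RTT.
import socket
import time

import boto3  # assumes the standard AWS SDK; edge zone exposed like any AZ

CANDIDATE_ENDPOINTS = {                      # hypothetical probe endpoints
    "us-east-1a": "probe.us-east-1a.example.net",
    "us-east-1-edge-1": "probe.edge1.operator.example.net",  # racks in the CO
}

def rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median TCP connect time as a crude last-mile latency probe."""
    times = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2):
            times.append((time.monotonic() - start) * 1000)
    return sorted(times)[len(times) // 2]

best_zone = min(CANDIDATE_ENDPOINTS, key=lambda z: rtt_ms(CANDIDATE_ENDPOINTS[z]))
ec2 = boto3.client("ec2")
ec2.run_instances(ImageId="ami-0123456789abcdef0",  # placeholder AMI
                  InstanceType="t3.medium", MinCount=1, MaxCount=1,
                  Placement={"AvailabilityZone": best_zone})
```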

Network operators, who have largely lost the cloud computing wholesale market to the hyperscalers, see edge computing as an opportunity to reintegrate the value chain by offering cloud-like services at incomparable performance. Ideally, they would like to capture and retain the emerging high performance cloud computing market that is sure to spawn a new category of digital services, ranging from AI-augmented manufacturing and automation to autonomous vehicles, ubiquitous facial and object recognition and compute-less smart devices. The problem is that a lot of these hypothetical services are ill-defined, far-fetched and futuristic, which does not inspire sufficient confidence in the CFO who has to approve multi-billion dollar capital expenditure to get going.
But surely, if the likes of Microsoft, Intel, HP, Google, Facebook, AWS are investing in Edge Computing there must be something there? What are the operators missing to make the edge computing business case positive?

Mobile or multi-access edge computing?

Many operators looked at edge computing first from the mobile perspective. The mobile edge computing business case remains extremely uncertain. There is no identified use case that justifies the cost of deploying thousands of mini compute nodes at mobile sites in the short term. Even with the perspective of upgrading networks to 5G, the added cost of mobile edge computing is hard to justify.

If not at mobile sites, the best bet for network operators to deploy edge computing is in Central Offices (COs). These facilities house switching platforms for copper, fiber and DSL connectivity and are overdue for an upgrade in many markets. The deployment of fibre, the replacement of copper and the evolution of technology from GPON to XGS-PON and NG-PON2 are excellent windows of opportunity to replace aging single-purpose infrastructure with open, software-defined computing capability.
The level of investment needed to retool central offices into mini data centers is orders of magnitude lower than in the mobile case, and it is completely flexible. It is not necessary to convert every central office; one can proceed by deploying one per state / province / region and increase capillarity as business dictates.

What use cases would make edge computing's business case positive for operators in that scenario?


  • First, for operators who offer triple and quadruple play, the opportunity to replace aging dedicated infrastructure for TV, fixed telephony, enterprise and residential connectivity with a cloud-native, software-defined, open architecture provides interesting savings and benefits. The savings are realized from the separation of hardware and software, the sourcing and deployment of white boxes, and the opex gains of separating the control plane and centralizing and automating service elasticity.
  • Additional savings are to be had with the deployment of content / video caches at the edge. Particularly for TV providers who see on-demand and unicast live traffic increasing, positioning edge caches allows up to 80% savings in content transport (a back-of-the-envelope sketch follows this list). This is likely to increase with the upgrade from HD to 4K and 8K and with growth in AR/VR.
  • Finally, for operators who deploy their own CPE in customers' homes, edge computing drastically simplifies this equipment and reduces its cost and its deployment / maintenance burden by moving services into the Central Office and reducing the need for storage and compute in the CPE.
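Here is the back-of-the-envelope sketch of the transport savings mentioned above; every input is an illustrative assumption, not measured operator data.

```python
# Illustrative arithmetic: how an 80% cache hit ratio at the CO translates
# into backbone transport relief. All figures are assumptions.
peak_video_gbps = 400             # unicast video crossing the backbone at peak
cache_hit_ratio = 0.80            # share of requests served from the edge cache
transport_cost_per_gbps = 25_000  # hypothetical yearly backhaul+core cost per Gbps

saved_gbps = peak_video_gbps * cache_hit_ratio
print(f"Backbone relief: {saved_gbps:.0f} Gbps "
      f"(~${saved_gbps * transport_cost_per_gbps:,.0f}/year at the assumed unit cost)")
```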

While the savings can be significant in the long run, no operator can justify replacing existing infrastructure whose amortization is not fully realized on the strength of these savings alone. This is why some operators are looking at these scenarios only for greenfield fiber deployments or as part of massive copper replacement windows.
Savings alone, in all likelihood, won't allow operators to deploy at the pace necessary to counter the hyperscalers. New revenue streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited by the poor online experience. The most successful game franchises are Massive Multiplayer Online titles. They pit dozens of players against each other and require a very controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, if they deploy gaming servers at the edge (see the sketch after this list). Even without a full-blown gaming service, providing game caching at the edge can drastically reduce the download time for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today and are proven to provide superior experience compared to the cloud. These services range from high performance cloud storage to remote desktop, video surveillance and recognition.
  • Beyond operator-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
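The gaming point is worth a sketch: for an MMO, what matters is not just low latency but a controlled spread of latency across players, and only sites deep in the operator network can minimize that spread. The RTT figures below are illustrative.

```python
# Sketch: pick the game server site that minimizes the latency spread
# between the best- and worst-served player (fairness), not just the mean.
# RTTs in ms from each player to each candidate site; numbers illustrative.
rtt = {
    "regional_cloud": {"p1": 48, "p2": 95, "p3": 60},
    "edge_co_north":  {"p1": 12, "p2": 35, "p3": 18},
    "edge_co_south":  {"p1": 30, "p2": 14, "p3": 22},
}

def fairness(site: str) -> int:
    values = rtt[site].values()
    return max(values) - min(values)   # latency spread across players

best = min(rtt, key=fairness)
print(best, f"spread={fairness(best)}ms")  # edge sites win on spread, not just mean
```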
The main issue at this stage for operators is deciding between two paths: let the hyperscalers deploy their infrastructure in the operator network, capturing most of the value of these emerging services but also opening up a new line of wholesale hosting revenue; or go it alone, as a single operator or a federation of them, deploying a telco cloud infrastructure and building the platform necessary to resell edge compute resources in their networks.

These and many more use cases and business cases are covered in my online workshop and report, Edge Computing 2020.

Thursday, May 5, 2016

MEC: The $7B opportunity

Extracted from Mobile Edge Computing 2016.
[Table of contents]
Defining an addressable market for an emerging product or technology is always an interesting challenge. On one hand, you have to evaluate the problems the technology solves and their value to the market; on the other hand, you have to appreciate the possible cost structure and psychological price expectations of potential buyers / users.

This warrants a top-down and bottom-up approach, looking at how the technology can contribute to or substitute for some current radio and core network spending, together with a cost-based review of the potential physical and virtual infrastructure. [...]

The cost analysis is comparatively easy, as it relies on the well understood current cost structure for physical hardware and virtual functions. The assumptions surrounding hardware costs have been reviewed with the main x86-based hardware vendors. The VNF pricing relies on discussions with large and emerging telecom equipment vendors about the price structure of standard VNFs such as EPC, IMS, encoding, load balancers and DPI. Traditional telco professional services, maintenance and support costs are apportioned and included in the calculations.

The overall assumption is that MEC will become part of the fabric of 5G networks and that MEC equipment will cover up to 20% of a network (coverage or population) when fully deployed.
The report features the total addressable market, cumulative and incremental, for MEC equipment vendors and integrators, broken down by CAPEX / OPEX and by consumer, enterprise and IoT services.
It then provides a review of operator opportunities and revenue models for each segment.
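For illustration, this is roughly what the bottom-up arithmetic looks like. Every figure below is a hypothetical placeholder for a single national footprint, not a number from the report; the only input taken from the text above is the 20% coverage assumption.

```python
# Illustrative bottom-up sizing; all unit figures are hypothetical placeholders.
macro_sites = 100_000        # sites in the modeled national footprint (assumed)
mec_coverage = 0.20          # assumption stated above: MEC covers up to 20%
hw_cost = 15_000             # x86 edge node per site (assumed)
vnf_and_services = 10_000    # VNF licenses + integration per site (assumed)

capex_tam = macro_sites * mec_coverage * (hw_cost + vnf_and_services)
print(f"Bottom-up CAPEX TAM for this footprint: ${capex_tam / 1e9:.1f}B")
```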


Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speeds and ultra-low latency to enable the Internet of Things, autonomous vehicles, augmented, mixed and virtual reality, and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses latest virtualization, small cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network. 


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are and how this potential multi-billion dollar technology can change how OTTs, operators, enterprises and machines enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator perspective.

The table of contents and executive summary can be downloaded here.

Wednesday, May 13, 2015

Mobile video monetization: the need for a mediation layer

Extracted from my latest report, Mobile Video Monetization 2015.

[...] What is clear from my perspective is that the stabilization of the value chain for monetizing video content in mobile networks is unlikely to happen quickly without an interconnect / mediation layer. OTT and content providers are increasingly collaborating when it comes to enabling connections and zero rating data traffic; but monetization plays involving advertising, sponsoring, price comparison, recommendation and geo-localized segmented offerings are really in their infancy.

Publishers are increasing their inventory and advertisers are targeting mobile screens, but network operators still have no idea how to enable this model in a scalable manner, presumably because many OTTs whose model is ad-dependent are not yet willing to share that revenue without a well-defined value.

Intuitively, there are many elements residing in an operator's network today that would enrich and raise the value of ad models in a mobile environment. Whether performance or impression driven, advertising relies on contextualization for engagement. A large part of that context could and should be whether the user is on wifi or on the cellular network, whether he is at home, at work or in transit, whether he is a prepaid or postpaid subscriber, how much data or messaging is left in his monthly allotment, whether the cell he is in is congested, whether he is experiencing impairments because he is far from the antenna or because he is being throttled close to the end of his quota, whether he is roaming or in his home network... The list goes on and on in terms of data points that can enrich or prevent a successful engagement in a mobile environment.
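Purely as an illustration, and keeping in mind that no such standard API exists today (as discussed below), an anonymized context payload built from these data points might look like this; all field names are hypothetical.

```python
# Hypothetical, anonymized network context an operator could expose to an
# ad platform; no such standard API exists today.
subscriber_context = {
    "session_token": "opaque-anon-7f3a",   # rotating pseudonym, no MSISDN
    "access": "cellular",                  # vs. "wifi"
    "location_class": "in_transit",        # home / work / in_transit
    "plan": {"type": "postpaid", "data_remaining_pct": 12},
    "radio": {"cell_congested": True, "signal": "poor", "throttled": False},
    "roaming": False,
}

def ad_strategy(ctx: dict) -> str:
    """Toy decision: degrade gracefully instead of burning a paid impression."""
    if ctx["radio"]["cell_congested"] or ctx["plan"]["data_remaining_pct"] < 15:
        return "static_banner"   # light creative, low failure risk
    return "video_preroll"       # rich creative when conditions allow

print(ad_strategy(subscriber_context))
```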

On the network front, understanding whether content is an ad or not, whether it is sponsored or not, whether it is performance or impression-measured, and whether it can be modified, replaced or removed at all from a delivery would be tremendously important to categorize and manage traffic accurately.

Of course, part of the problem is that no advertiser, content provider, aggregator or publisher wants to cut deals individually with the 600+ mobile network operators and the 800+ MVNOs if they do not have to.

Since there is no standard API to exchange these data in a meaningful yet anonymized fashion, the burden rests on the parties to create, on a case by case basis, the foundation for this interaction from a technical and commercial standpoint. This is not scalable and won't work fast enough for the market to develop meaningfully.
This is not the first time a similar problem has occurred in mobile networks; whether for data or messaging interconnection, roaming or inter-network settlements, IPX and interconnect companies have emerged to ease the pain of mediating traffic and settlements between networks.

There is no reason a similar model shouldn't work for connecting mobile networks, advertisers and OTT providers in a meaningful clearing house type of partnership. There is no technical limitation here; it just needs a transactional engine separating control plane from data plane, integrated with ad networks and IPX, and a meaningful API to carry subscriber and session information on the control plane both ways (from the network to the content provider and vice versa). Companies that could make this happen include traditional IPX providers such as Syniverse, but companies with more advertising DNA such as Opera, Amazon or Google are probably better bets. [...]

Tuesday, May 5, 2015

NFV world congress: thoughts on OPNFV and MANO

I am in sunny San Jose, California this week at the NFV World Congress, where on Thursday I will chair the stream on Policy and Orchestration - NFV Management.
My latest views on SDN / NFV implementation in wireless networks are published here.

The show started today with a mini-summit on OPNFV, looking at the organization's mission, roadmap and contributions to date.

The workshop was well attended, with over 250 seats occupied and a good number of people standing in the back. On the purpose of OPNFV, it feels that the organization is still trying to find its footing, hesitating between being a bridge between ETSI NFV and open source implementation projects and graduating to a prescriptive set of blueprints for NFV implementations in wireless networks.

If you have trouble following, you are not the only one; I am quite confused myself. I thought OpenStack had a mandate to create source code for managing cloud network infrastructure, and that ETSI NFV was looking at managing services in a virtualized fashion, whether they sit on premises, in clouds or in hybrid environments. Since ETSI NFV does not produce code, why do we need OPNFV for that?

Admittedly, the organization is not necessarily deterministic in its roadmap, but rather works on what its members feel is needed. As a result, it has decided that its first release, code-named ARNO, will support KVM as the hypervisor environment and will feature an OpenStack architecture underpinned by an OpenDaylight-based SDN controller. ARNO should be released "this spring" and is limited in scope, as a first attempt to provide an example of carrier-grade, ETSI NFV-based source code for managing an SDN infrastructure. Right now, ARNO is focused on the VIM (Virtual Infrastructure Management); since the full MANO is not yet standardized and is felt to be too big a chunk for a first release, it will be part of a later requirement phase. The organization advocates pushing requirements and bug resolutions upstream (read: to other open source communities) to make the whole SDN / NFV stack more "carrier-grade".

This is where, in my mind, the reasoning breaks down. There is a contradiction in terms and intent here. On one hand, OPNFV advocates that there should not be separate branches for carrier-specific requirements within implementation projects such as OpenStack; carrier-grade being the generic shorthand for high availability, scalability and high performance, the rationale is that these improvements could benefit the whole OpenStack ecosystem. On the other hand, OPNFV seems to have been created primarily to implement and test NFV-based code for carrier environments. Why do we need OPNFV at all if we can push these requirements within OpenStack and ETSI NFV? The organization feels more like an attempt to supplement or even replace ETSI NFV with an open source collaborative project that would be out of ETSI's hands.

More importantly, if you have been to an OpenStack meeting, you know that you are probably twice as likely to meet people from the banking, insurance, media or automotive industries as from the telecommunications space. I have no doubt that, theoretically, everyone would like more availability, scalability and performance, but in practice the specific needs of each enterprise segment rarely mean they are willing to pay for over-engineered networks. Telco carrier-grade was born from regulatory pressure to provide a public infrastructure service; many enterprises wouldn't know what to do with the complications and constraints arising from it.

As a result, I personally have doubts about the ability of telcos and forums such as OPNFV to influence larger groups such as OpenStack to deliver a "carrier-grade" architecture and implementation. I think telco operators and vendors are a little confused by open source. They essentially treat it as a standards body, submitting change requests, requirements and gap analyses, while not enough is done (by the operator community at least) to actually get their hands dirty and code. The examples of AT&T, Telefonica, Telecom Italia and some others are not, in my mind, reflective of the industry at large.

If ETSI were more effective, service orchestration in MANO would be the first agenda item, and plumbing such as the VIM would be delegated to more advanced groups such as OpenStack. If a network is to become truly elastic, programmable, self-reliant and agile in a multi-vendor environment, then MANO is the brain, and it has to be defined and implemented by the operators themselves. Otherwise, we will see Huawei, Nokialcatelucent, Ericsson, HP and others effectively become the app store of the networks (last I checked, it did not work out very well for operators when Apple and Android took control of that value chain...). Vendors have no real incentive to make orchestration open and to fulfill the vendor-agnostic vision of NFV.


Thursday, June 26, 2014

LTE World Summit 2014

This year's 10th edition of the conference seems to have found a new level of maturity. While VoLTE, RCS and IMS are still subjects of interest, we seem to be past the hype at last (see last year), with a more pragmatic outlook towards implementation and monetization.

I was happy to see that most operators now recognize the importance of managing video experience for monetization. Du UAE's VP of Marketing, Vikram Chadha, seems to get it:
"We are transitioning our pricing strategy from bundles and metering to services. We are introducing email, social media, enterprise packages and are looking at separating video from data as a LTE monetization strategy."
As a result, the keynotes were more prosaic than in past editions, focusing on the cost of spectrum acquisitions and the regulatory pressure in the European Union preventing operators from mounting any defensible position against the OTT assault on their networks. Much of the show's agenda focused on pragmatic subjects such as roaming, pricing, policy management, heterogeneous networks and wifi/cellular handover. Nothing obviously earth-shattering on these subjects, but steady progress, as the technologies transition from lab to commercial trials and deployment.

As an example, there was a great presentation by Bouygues Telecom's EVP of Strategy, Frederic Ruciak, highlighting the company's strategy for the launch of LTE in France, a very competitive market, and how the company was able to achieve the number one spot in LTE market share despite being the number 3 "challenger" in 2G and 3G.

The next buzzword on the hype cycle to rear its head is NFV, with many operator CTOs publicly hailing the new technology as the magic bullet that will allow them to "launch services in days or weeks rather than years". I am getting quite tired of hearing that rationalization as an excuse for the multi-million dollar investments made in this space, especially when no one seems to know what these new services will be. Right now, the only arguable benefit is capex containment, and I have seen little evidence that it will pass this stage in the mid term. Like the teenage sex joke, no one seems to know what it is, but everybody claims to be doing it.
There is still much to be resolved on this matter and the discussion will continue for some time. The interesting new positioning I heard at the show is appliance vendors referring to their offerings as PNFs (physical network functions), in contrast to and as enablers for VNFs. Although it sounds like a marketing trick, it makes a lot of sense for vendors to illustrate how NFV inserts itself into a legacy network, leading inevitably to a hybrid network architecture.

The consensus here seems to be that there are two prevailing strategies for the introduction of virtualized network functions.

  1. The first, "cap and grow", sees existing infrastructure equipment capped beyond a certain capacity and complemented little by little by virtualized functions, allowing incremental traffic to find its way onto the virtualized infrastructure. A variant might be "cap and burst", where a function subject to bursty traffic is dimensioned on physical assets for the average peak traffic and all exceeding traffic is diverted to a virtualized function (see the sketch after this list).
  2. The second favours the creation of vertical virtualized networks for greenfield market or traffic segments, M2M and VoLTE being the most cited examples.
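A minimal sketch of the "cap and burst" variant described in the first strategy, with an assumed session-count threshold standing in for whatever capacity metric an operator would actually dimension against:

```python
# Sketch of "cap and burst": the physical function is dimensioned to a fixed
# capacity and overflow is steered to a virtualized pool. Threshold assumed.
PNF_CAPACITY_SESSIONS = 100_000   # what the legacy appliance is dimensioned for

def route_session(active_on_pnf: int) -> str:
    """Steer each new session: physical asset first, virtual overflow next."""
    return "pnf" if active_on_pnf < PNF_CAPACITY_SESSIONS else "vnf_pool"

for load in (50_000, 99_999, 100_000, 140_000):
    print(load, "->", route_session(load))
```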

Both strategies have advantages and flaws that I explore in my upcoming report, "NFV & Virtualization in Mobile Networks 2014". Contact me for more information.



Tuesday, April 15, 2014

Mobile video OTT opportunity interview

This is the video interview that was shot at the Monetizing OTT conference I chaired in London last month.



Questions answered:
How has the mobile video market evolved in recent years?
How is OTT changing the value chain and revenue opportunity?
Role of operators in the value chain?
Biggest challenge for operators to develop their own OTT service?
Differences between US & EU markets?



Tuesday, October 29, 2013

I want my... I want my HBO part II

Our second story this week is local. Canada's regulator, the CRTC (Canadian Radio-television and Telecommunications Commission), has started consultations on un-bundling TV channels for cable and satellite payTV. In essence, the regulator asserts that bundling TV channels might be consumer-adverse and that forcing someone to pay for basic cable/satellite + digital channels + movie package + HD + HBO in order to watch Game of Thrones is not in the consumer's best interest.
Both sides of the discussion are already engaging in strong arguments. On one hand, it is true that bundling has allowed consumers to discover new content that might not have been their initial choice when selecting their channel line-up. In many cases, you are drawn to a new show or series by a combination of peer recommendation, previews / advertisement and pure chance. If you did not select the channel to start with, you are removing a large part of the discovery opportunity, and I do not see how that can benefit the consumer, the programmer or the advertiser. There are already rumors that some US channels could simply pull out of Canadian airwaves rather than bend to Canadian pick-and-choose TV, which could set a precedent in the US.
On the other hand, TV bundling and prices have gotten out of hand in Canada. It is not unusual to pay over $100 per month for TV programming, which is a high price if, when all is said and done, you watch on average maybe 15 channels in your fixed rotation. Unbundling could certainly cut costs dramatically for consumers if they are allowed to select channels individually rather than in bundles. That is, if the MSOs practice fair prices, which is a big if in Canada. Coopetition has been the operating model rather than aggressive competition, and prices for unbundled channels could end up being more expensive than bundled ones, which would damage the consumer's wallet, anger the US rights holders and precipitate an OTT exodus.

I am in favor of unbundling, but it has to be done in a very careful fashion. It can be beneficial to the consumer only if:

  • It leads to more choice rather than less (US channels need to stay)
  • It is easy to add and remove channels, with no subscription longer than month-to-month and no penalty.
  • Individual channel selection is not sub-bundled (having to subscribe to channel X in order to pay extra for channel X HD). I should be able to select only HD channels if I want, and not have to carry both SD and HD. It is OK to pay more for HD than SD.
  • Catch-up TV, time-shifted and à la carte on-demand offerings could be bundled with individual channels (for instance, AMC SD $1, HD $1.50, with on demand $3...).
  • It is OK to have public programming as a bundle that is part of the basic subscription package.

In this manner, the successful channels will reap the bulk of the consumer's money, but special interest channels will still reach their audience. Channels that have no audience will not be artificially sustained by bundled packages. Channels will be able to compete on a series-by-series, show-by-show basis, encouraging original programming and exclusive rights, and allowing true competition for premium content.

These two stories illustrate perfectly the risks and opportunities of OTT vs payTV. The business models are not settled yet, major players are announcing new moves every week. It is an exciting time to work in this industry.

Monday, October 28, 2013

I want my... I want my HBO

Two pieces of news caught my eye over the last week that spell, in my mind, both a vindication of and a perfect example of the seismic changes being experienced in the OTT and payTV landscapes in North America.

The first story is in the US. As I was provisioning my new car's hard drive with my eclectic music collection earlier this week, my son stumbled upon an old favorite, and I was elated to witness his discovery of Dire Straits' "Money for Nothing". As we were happily singing "I want my... I want my MTV", I was reminded of a post I wrote two years ago, musing about when HBO would be able to go direct to consumer in North America.
It seems my question was answered this week, with Comcast launching a new plan for cord-cutters and cord-nevers, offering Xfinity Streampix, HBO and HBO Go together with broadband for $39.99. A US Comcast customer will be able to watch HBO over the web on their broadband subscription without having to be a cable customer. The FCC (the US regulator) mandates that premium channels be bundled with basic broadcast, so that is in there as well, but this is a clear tipping point. For the first time, HBO is going head to head with Netflix, going pure OTT. As I am moving to my new house next month, this has me rethinking my TV strategy, and I am certainly going to wait and see what trickles down to Canada. Increasingly, I am thinking of upping my broadband subscription and shaving as much as I can off, if not cutting outright, my cable / satellite subscription.

The implications are profound and it is a floodgate moment. Netflix now has more subscribers than HBO, which prompts Comcast to start the self-cannibalization. If you are losing subscribers, you might as well lose them to yourself and a friendly content provider rather than to a competitor.
You will read about the second story tomorrow.

Wednesday, October 31, 2012

How to monetize mobile video part II

These posts are excerpts from my article in Mobile Europe from October 2012.

The Age Of Video: How Mobile Networks Must Evolve


In 3G, mobile network operators find themselves in a situation where their core network is composed of many complex elements (GGSN, EPC, browsing gateways, proxies, DPI, PCRF…) that are extremely specialized but were designed with transactional data in mind. Radio access is a scarce resource, with many operators battling their regulators to obtain more spectrum. The current model for adding capacity, based on purchasing more base stations and densifying the network, is finding its limits. Network build-up costs are even expected to exceed data revenues in the coming years.
On the technical front, some operators are approaching Shannon's limit, the theoretical ceiling for spectrum efficiency. Diminishing returns are the rule rather than the exception as the RAN (Radio Access Network) becomes denser for the same available spectrum, and noise and interference increase.
On the financial front, should an operator follow demand, it would have to double its mobile data capacity on a yearly basis, while the projected revenue increase for data services shows only a 20% CAGR through 2015. How can operators keep running their business profitably?
Operationally, doubling capacity every year seems impossible for most operators, who plan roll-outs over 3 to 5 years. A change of paradigm is necessary.
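The arithmetic behind that paradigm shift is brutal. A five-line illustration, using the two growth rates above and assuming cost scales with carried traffic:

```python
# The squeeze in numbers: demand doubling yearly vs. ~20% revenue CAGR.
# Illustrative only; assumes cost scales with carried traffic.
traffic, revenue = 1.0, 1.0
for year in range(1, 6):
    traffic *= 2.0      # capacity demand doubles every year
    revenue *= 1.2      # data revenue grows at 20% CAGR
    print(f"Year {year}: traffic x{traffic:.0f}, revenue x{revenue:.2f}, "
          f"revenue per unit of traffic x{revenue / traffic:.3f}")
# After 5 years, each unit of traffic earns ~13x less than it did at the start.
```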
Solutions exist and are starting to emerge: upgrading to HSPA+ and LTE, using smaller cells, drastically changing the pricing structure of video and social services, network and video optimization, offloading part of the traffic to wifi, implementing adaptive bit rate, optimizing the radio link, caching, using CDNs, imagining new business models between content providers, device manufacturers and operators…

Detect

The main issue is one of network intelligence. Mobile network operators want their network utilization optimized, not minimized. Traffic patterns need to be collected, analyzed and represented so that data, and particularly video, can be projected, but not at the country-wide, multi-year level as is done today. It is necessary to build granular network planning capacity per sector and cell, at the RAN, core and backhaul levels, with tools that are video-aware. Current DPI and RAN monitoring tools cannot detect video efficiently or analyze it deeply enough to allow for pattern recognition. Additionally, it is necessary to be able to isolate, follow and act on individual video streams on a per-subscriber, per-service, per-property, per-CDN level, not simply at the protocol level.
Current mobile network analytics capabilities are mostly inherited from 3G. DPI and traffic management engines rely mostly on protocol analysis and packet categorization to perform their classification and reporting. Unfortunately, in the case of video, this is insufficient. Video takes many forms in mobile networks and is delivered over many protocols (RTSP, RTMP, HTTP, MPEG2-TS…). Recognizing these protocols is not enough to perform the necessary next steps. Increasingly, video traffic is delivered over HTTP progressive download, and most current analytics tools cannot recognize it as video; they rely on URL recognition rather than traffic analysis. This leads to issues: how do you differentiate a user browsing between YouTube pages from one watching a video? How do you discriminate videos embedded in pages? How do you recognize YouTube videos embedded in Facebook? How do you know whether a video is an advertisement or main programming? How do you know whether a video should be delivered in HD or at a lower resolution?
To categorize and manage video accurately, it is necessary to recognize, at a minimum, the video protocol, container, codec, encoding rate, resolution, duration and origin, in order to perform pattern recognition.
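A sketch of what payload-based detection means in practice, as opposed to URL matching: the first bytes of a response are enough to identify common video containers regardless of which site serves them. This is illustrative, not production-grade DPI.

```python
# Sketch of payload-based video detection (vs. URL matching), using container
# signatures that are stable across sites.
def classify_payload(first_bytes: bytes) -> str:
    """Identify common video containers from the first bytes of a response."""
    if first_bytes[4:8] == b"ftyp":           # ISO BMFF / MP4 family
        return "video/mp4"
    if first_bytes[:3] == b"FLV":             # Flash Video
        return "video/x-flv"
    if first_bytes and first_bytes[0] == 0x47 and len(first_bytes) > 188 \
            and first_bytes[188] == 0x47:     # MPEG-TS 188-byte sync pattern
        return "video/mp2t"
    return "not-video-or-unknown"

# A progressive-download MP4 is detected even behind a generic HTTP URL:
print(classify_payload(bytes.fromhex("0000001C667479706D70343200000000")))
```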

Measure Experience, not Speed or Size

The next necessary step after identifying and indexing the video traffic is the capacity to grade it from a quality standpoint. As video quality becomes synonymous with network quality in viewers' minds, mobile network operators must be able to measure and control video quality. Current capabilities in this space are focused on measuring network speed and content size and inferring user satisfaction from them. This is inadequate.
Any hope of monetizing mobile video beyond byte accounting relies on being able to reliably grade video content in terms of quality. This quality measurement is the cornerstone of providing subscribers with the assurance that the content they view conforms to the level of quality they are entitled to. It is also necessary for network operators to establish a baseline with content providers and aggregators, who view content quality as one of the main elements of pricing.
A uniform Quality of Experience (QoE) measurement standard is necessary for the industry to progress. Today, there is no valid QoE metric for mobile networks, leaving mobile operators relying on sparse proprietary tools, often derived from or created for broadcast and professional video, that are wholly inadequate for mobile networks. Mobile network operators must be able to measure QoE per video, subscriber, session, sector, cell, origin and CDN if they want to create intelligent charging models.
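Since, as noted, no valid standard exists today, the following is only a toy per-session score, to illustrate what "grading" a stream against the factors viewers actually notice could look like; the weights and thresholds are arbitrary assumptions.

```python
# Toy per-session QoE score on a 1..5 scale; weights are arbitrary assumptions,
# not a standard. Blends startup delay, stalling and delivered fidelity.
def session_qoe(startup_s: float, stall_ratio: float,
                avg_bitrate_kbps: float, target_bitrate_kbps: float) -> float:
    startup_penalty = min(startup_s / 10.0, 1.0)    # >10s startup = worst case
    stall_penalty = min(stall_ratio * 5.0, 1.0)     # 20% stalling = worst case
    fidelity = min(avg_bitrate_kbps / target_bitrate_kbps, 1.0)
    score = 5.0 - 2.0 * startup_penalty - 2.0 * stall_penalty
    return max(1.0, min(5.0, score * fidelity))

# 2s startup, 1% stalling, 800 kbps delivered against a 1200 kbps target:
print(round(session_qoe(2.0, 0.01, 800, 1200), 2))
```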

Analyze, Segment Consumers and Traffic

Mobile network operators have been efficiently segmenting their customer base, building packages, bundles and price plans adapted to their targets. In the era of video, this is not enough.
Once traffic is identified, indexed and recognized, it is important to segment the population and the usage. Is video traffic mostly from premium content providers and aggregators, or from free user-generated sites? Are videos watched mostly long-form or short-form? Are they watched on tablets or smartphones? Are they very viral and watched many times, or do consumers rather follow the long tail? These data points and many others are necessary to understand the nature of subscribers' consumption and will dictate which solutions are most appropriate. This is a crucial step towards controlling video traffic.

Control, Manage

Once video traffic is correctly identified and indexed, it becomes possible to manage it. This is a controversial topic, as net neutrality as a concept is far from settled, at least in the mobile world. My view is that in a model where scarcity (spectrum, bandwidth) and costs are borne by one player (operators) while revenue and demand are driven by others (content providers and subscribers), net neutrality is impractical and anti-competitive. Unlike fixed networks, where quasi-unlimited capacity and low entry costs allow the easy introduction of content and services, mobile networks' cost structures and business models are managed systems where demand outgrows capacity, which negates equal access to resources. For instance, no one is talking about net neutrality in the context of television. I believe that operators will be able to discriminate traffic and offer models based on subscriber and traffic differentiation; many already can. It is simply a recognition that today, with the current setup, traffic gets degraded naturally as demand grows, and DPI and traffic management engines already provide the means to shape and direct traffic in everyone's best interest. No one could imagine networks where P2P file sharing traffic goes unchecked and monopolizes the network capacity.
Additionally, all videos are not created equal. There are different definitions, sizes and encoding rates. There are different qualities: some videos are produced professionally with big budgets, some are user-generated; some are live, some are file-based; some are downloaded, some are streamed; some are premium, some are sponsored, some are freemium, some are free… Videos in their diversity, and in their modes of consumption (some viewers want HD content at the highest quality and will prefer download over streaming; others prefer a video that runs uninterrupted with a small load time, even at a lesser quality…), bear the key to monetization.

Monetize

Mobile network operators must be able to act on video and subscriber attributes and influence the user's experience. Being able to divert traffic to other bearers (LTE, wifi…) and to adjust a video's quality on the fly are important steps towards creating classes of service, not only amongst subscribers but also between content providers.
It is important as well to enable subscribers to select specific quality levels on the fly, and to develop the charging tools to provide instant QoE upgrades.
With the capacity to detect, measure, analyze, segment, control and manage, operators can then monetize video. The steps highlighted here give operators the means to create sophisticated charging models whereby subscribers, content providers and aggregators are included in a virtuous value circle.
Operators should explore creating different quality thresholds for the video content that transits through their networks. This becomes a means to charge subscribers and / or content providers for premium guaranteed quality.
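A minimal sketch of what such quality-threshold charging could look like, with hypothetical tariffs and the rule that the premium is earned only when the measured QoE actually met the promise:

```python
# Sketch of quality-tier charging (hypothetical tariffs): the same session is
# rated differently depending on the quality level actually delivered.
TARIFFS = {"best_effort": 0.0, "assured_sd": 0.01, "assured_hd": 0.03}  # $/min

def session_charge(tier: str, minutes: float, qoe_met: bool) -> float:
    """Charge the premium only when the promised quality was delivered."""
    if tier != "best_effort" and not qoe_met:
        return 0.0                  # SLA missed: waive the quality premium
    return TARIFFS[tier] * minutes

print(session_charge("assured_hd", 42, qoe_met=True))
```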

Monday, October 29, 2012

How to monetize mobile video part I


These posts are excerpts from my article in Mobile Europe from October 2012.
Video is a global phenomenon in mobile networks. In less than 3 years it has exploded, growing from a marginal use case to over 50% of mobile traffic in 2012.
Until 3G, mobile networks were designed and deployed predominantly for transactional data. Messaging, email and browsing are fairly low-impact and lightweight in terms of payload, and only require speeds compatible with UMTS. Video brings a new element to the equation. Users rarely complain if their text or email arrives late; in fact, they rarely notice. Video provides immediate feedback. Consumers demand quality and increasingly equate the network's quality with the video quality.
With the wide implementation of HSPA(+) and the first LTE deployments, together with the availability of attractive new smartphones, tablets and ultrabooks, it has become clear that today's networks and price structures are ill-prepared to meet these new challenges.

From value chain to value circles: the operators’ broken business model

One of the main reasons the current models are inadequate for monetizing video is the unresolved changes in the value chain. Handset and device vendors have gained much power in the balance lately, and many consumers choose a device or a brand first, before a network operator. In many cases, subscribers will churn from their current operator if they cannot get access to the latest device. Additionally, device vendors, with the advent of app stores, have become content aggregators and content providers, replacing the operators' traditional value-added services.
In parallel, the suppliers of content and services are boldly pushing their consumer relationships to bypass traditional delivery media. These Over-The-Top (OTT) players extract more value from consumers than the access and network providers do. This trend is accelerating and threatens the very fabric of the business model for delivering mobile services.

Mobile video is already being monetized by premium content vendors and aggregators through subscription, bundling and advertisement. Mobile network operators find themselves excluded overnight from these new value circles while being forced to support the burden of the investment. In many cases, this situation is a self-inflicted wound.


Operators competed fiercely to acquire more subscribers while markets were growing. As mature markets approached saturation, price differentiation became a strong driver to capture and retain subscribers. As 3G was being rolled out in the mid 2000s, the mobile markets were not yet saturated and mobile network operators' business models still revolved around customer acquisition. A favourite tool was the introduction of all-you-can-eat unlimited data plans to accelerate customer acquisition and capture through long-term captive contracts. As a result, customer penetration grew, accelerating with the introduction of smartphones and tablets by 2007. By 2009, traffic had started to grow exponentially.
Data traffic was growing faster than expected: AT&T's data traffic grew 80x between 2007 and 2010 and is projected to grow another 10x between 2010 and 2015. Korea Telecom's traffic grew 2x in 2010, Softbank's (Japan) traffic doubled in 2011, and Orange France's traffic doubled in 2010 and doubled again in 2011. In 2012, mature operators are trying to acquire smartphone users, as it is widely believed that their ARPU (Average Revenue Per User) is much higher (nearly twice) than that of traditional feature phone subscribers.
The cost of acquiring these subscribers is significant, as many operators end up subsidizing the devices and having to significantly increase their network capacity.
At the same time, it appeared that consumer data consumption was changing: the "bandwidth hogs", the top 1% who used to consume 30 to 40% of the traffic, were now consuming about 20%. They were not consuming less; the average user was consuming a lot more, and everyone was becoming a voracious data user.
The price plans devised to make sure the network is fully utilized are backfiring, and many operators are now discontinuing all-you-can-eat data plans and incentivizing the adoption of limited, capped, metered models.
While 4G is seen as a means to increase capacity, it is also a way for many operators to introduce new charging models and to depart from bundled, unlimited data plans. It is also a chance to redraw the mobile network to accommodate what is increasingly becoming a video delivery network rather than a voice or data network.


Monday, October 31, 2011

Connexus: Avvasi, BroadHop, CommProve and Spirent Communications

On October 11, Avvasi, BroadHop, CommProve and Spirent Communications announced in a press release the creation of Connexus, an ecosystem for monetizing OTT.


Personally, I am fairly skeptical about anyone's capacity to monetize free OTT besides the content owners and aggregators themselves, so I called up Mate Prgin, president and CEO of Avvasi, to get a little more detail on this new initiative.


"We are all familiar with the take off of video in wireless networks, and how OTT is a large part of this. Optimization techniques have been used today mostly in a defensive manner, to keep costs down and are necessary but really only a band aid.
Today's main issue is is to align revenues with costs. Operator's best asset is the last mile, ensuring connectivity and quality of experience (QOE). It should allow them to monetize this service to announcers and content providers" started Mate.  He agreed, when pressed that in the short term, monetization opportunities will be mostly around premium content and services.
Connexus is an initiative to catalyze and accelerate the creation of a standard offering a framework for operators and content owners to trade content delivery revenue against QoE guarantees. Last-mile QoE, traffic management, QoE testing and policy management are all in scope. "This is not a co-marketing exercise," says Mate. Today, the initiative spearheaded by the 4 founding companies presents blueprints, use cases and a roadmap for monetizing OTT, with trials and proofs of concept planned for early 2012.


While these documents are available under NDA to these companies' prospects, Connexus is open to new members and is actively talking with 4 new applicants.

While I don't fully subscribe to some of the premises, I have been a vocal supporter of new standards in the area of traffic management. In my mind, as video becomes business-critical and demand outstrips capacity in mobile networks, we need a mechanism to relay congestion and capacity information from the RAN to the core and the backhaul, to enable some meaningful negotiation of network capacity. If, in the meantime, it leads to some monetization of the delivery, good for the network operators, but I think we are still very far from operators being able to guarantee strong SLA-backed QoE to content providers.


This initiative will need a lot more support from larger names to be effective and relevant in the global ecosystem. I also doubt it can succeed without bringing the content owners and aggregators themselves into the discussion. It is a step in the right direction, though, and it is good to see companies starting to talk about monetization rather than savings when it comes to OTT. It will be interesting to follow how operators and large equipment vendors react to Connexus. I am hearing more announcements will follow at Mobile World Congress.

Friday, September 9, 2011

How to charge for video? part 3 - Pros and Cons

Here are the pros and cons of the methods identified in the previous post.



Unlimited usage
  Pros: Customer friendly; good for acquisition and churn reduction; will be a real differentiator in the future.
  Cons: Hard to plan network capacity; expensive if data usage continues doubling on a yearly basis.

Fair limit
  Pros: Provides some capacity planning.
  Cons: The limit tends to change often, as the ratio of abusers vs. heavy users goes down.

Hard cap
  Pros: No revenue leakage; easy network planning (max capacity needed = max number of users x cap).
  Cons: Not customer friendly; does not allow the capture of additional revenue.

Hard cap with overage fee
  Pros: Can be very profitable with a population that has frequent overage.
  Cons: Many customers complain of bill shock.

Soft cap
  Pros: Customer friendly, easy to understand.
  Cons: Not as profitable in the short term.

Soft cap with throttling
  Pros: A better alternative to hard caps in markets where video usage is not yet very heavy.
  Cons: Becomes less and less customer friendly as video traffic increases.

Speed capping
  Pros: Very effective for charging per type of usage and educating customers.
  Cons: Requires a sophisticated network (DPI + charging + subscriber management).

Application bundling
  Pros: Popular in mature markets with high competition, where subscribers become expert at shopping and comparing the different offerings.
  Cons: Complex; requires a sophisticated network and a good understanding of subscriber demographics and usage to maximize revenue.

Metered usage
  Pros: Very effective way to ensure that capacity planning and revenue are tied.
  Cons: Not very popular, as many subscribers do not understand megabytes or how 2 minutes of video can "cost" anywhere from 1 to 10 times as much.

Content based charging
  Pros: Allows sophisticated tariffing that maximizes revenue.
  Cons: Complex; requires a sophisticated network and a good understanding of subscriber demographics and usage to maximize revenue. The technology is not quite ready.

Time of day charging
  Pros: For operators who have a "prime time" effect, with peaks an order of magnitude higher than average traffic, an effective way to monetize the need to size for peak.
  Cons: Not very popular. The network is still underutilized most of the time.

Location based charging
  Pros: Will allow operators with "hot spots" to mitigate usage in these zones or at least to finance capacity.
  Cons: Most subscribers won't accept having to carry a map to understand how much their call/video will cost.

As with many trends in wireless, it will take a while before the market matures enough to settle on a technology and a business model that are both user-friendly and profitable for the operators. Additionally, the emergence of over-the-top traffic, with content providers and aggregators now selling their services directly to customers, forces the industry to examine charging and tariffing models in a more fundamental fashion.
Revenue sharing, network sharing and load sharing require traditional core network technologies to be exposed to external entities for a profitable model where brands, content owners, content providers and operators are not at war. New collaboration models need to be devised. Additionally, while the technology has made much progress, the next generation of DPI, PCRF and OSS/BSS will need to step up to enable these sophisticated charging models.