
Friday, August 16, 2024

Rant: Why do we need 6G anyway?


I have to confess that, even after 25 years in the business, I am still puzzled by the way we build mobile networks. If tomorrow we were to restart from scratch, with today's technology and knowledge of the market, we would certainly design and deploy them in a very different fashion.

Increasingly, mobile network operators (MNOs) have realized that the planning, deployment and management of the infrastructure is a fundamentally different business from the development and commercialization of the associated connectivity services. They follow different investment and amortization cycles and have very different economic and financial profiles. For this reason, investors value network infrastructure differently from digital services, and many MNOs have decided to start separating their fibre, antenna and radio assets from their commercial operations.

This has resulted in a flurry of splits, spin-offs and divestitures, and in the growth of specialized tower and infrastructure companies. If we follow this pattern to its logical conclusion, looking at the failed economics of 5G and the promises of 6G, one has to wonder whether we are on the right path.

Governments keep treating spectrum as a finite, exclusive resource, whereas demand for private networks and unlicensed spectrum keeps increasing: there is a clear cognitive dissonance in the economic model. If 5G's success was predicated on connectivity for enterprises, industries and verticals, and if these organizations have needs that cannot be satisfied by the public networks, why would MNOs spend so much money on spectrum that is unlikely to bring additional revenue? The consumer market does not need another G until new services and devices emerge that mandate different connectivity profiles. The metaverse was a fallacy; autonomous vehicles, robots... are in their infancy and work around the lack of adequate connectivity by keeping their compute and sensors on the device, rather than at the edge.

As the industry prepares for 6G and its associated future hype, nonsensical use cases and fantastical services, one has to wonder how we can stop designing networks for use cases that never emerge as dominant, forcing redesigns and late adaptation. Our track record as an industry is not great there. If you remember, 2G was designed for voice services; texting was the unexpected killer app. 3G was designed for Push to talk over Cellular, believe it or not (remember SIP and IMS...), and picture messaging and early browsing were the successes. 4G was designed for Voice over LTE (VoLTE), and video / social media were the key services. 5G was supposed to be designed for enterprise and industry connectivity but has so far failed to deliver (late implementation of slicing and 5G Standalone). So... what do we do now?

First, the economic model has to change. Rationally, it is not economically efficient for 4 or 5 MNOs to buy spectrum and deploy separate networks to cover the same population. We are seeing more and more network sharing agreements, but we must go further. In many countries, it makes more sense to have a single neutral infrastructure operator owning the cell sites, radios, fiber backhaul and even the edge data centers / central offices, all the way up to but not including the core. This neutral host can run an economic model based on wholesale, and the MNOs can focus on selling connectivity products.

Of course, this would probably require some level of governmental and regulatory overhaul to facilitate this model. Obviously, one of the problems is that many MNOs would have to transfer assets and, more importantly, personnel to that neutral host, which would undoubtedly shed many redundant positions in consolidating 3 or 4 teams into one. Most economically advanced countries have unions protecting these jobs, so this transition is probably impossible unless a concerted effort to cap hires, not replace retirement departures and retrain people is made over many years...

The other part of the equation is the connectivity and digital services themselves. Let's face it, connectivity differentiation has mostly been a pricing and bundling exercise to date. MNOs have not been overly successful with the creation and sale of digital services, with social media and video streaming services having captured most of consumers' interest. On the enterprise side, a large part of the revenue is related to the exploitation of the last mile connectivity, with the sale of secure private connections on public networks, first as MPLS, then SD-WAN, SASE and cloud interconnection, as the main services. Gen AI promises to be the new shining beacon of advanced services but, in truth, there is very little there in the short term in terms of differentiation for MNOs.

There is nothing wrong with being a very good, cost-effective, performant utility connectivity provider. But most markets can probably accommodate only one or two of these. Other MNOs, if they want to survive, must create true value in the form of innovative connectivity services. This supposes a change not only of mindset but also of skill set. I think MNOs need to look beyond the next technology, the next G, and evolve towards a more innovative model. I have worked on many of these efforts, from the framework to the implementation and systematic creation of sustainable competitive advantage. It is quite different work from the standards-and-technology-evolution approach favored by MNOs, but necessary for those seeking to escape the utility model.

In conclusion, 6G or technological improvements in speed, capacity, coverage, latency... are unlikely to solve the systemic economic and differentiation problems of MNOs unless more effort is put into service innovation and radical infrastructure sharing.

Monday, May 29, 2023

The RICs - brothers from a different mother?

As you might know, if you have been a regular reader of my previous blogs and posts, I have been a spectator, advocate and sometimes actor in telecoms open and disaggregated networks for quite some time.

From my studies on SDN / NFV, to my time with Telefonica, the TIP forum and the ONF, to more recently my forays into Open RAN at NEC, I have been trying to understand whether telecom networks could, by adopting cloud technologies and concepts, evolve towards more open and developer-friendly entities.

Unfortunately, as you know, we love our acronyms in telecom. It is a way for us to reconcile obtuse technology names into even more impenetrable concepts that only "specialists" understand. So, when you come across acronyms that contain other acronyms, you know that you are dealing with extra special technology.

Today, let's spend some time on the RICs. RIC stands for RAN (Radio Access Network) Intelligent Controller. The RICs are elements of open RAN, as specified by the O-RAN Alliance. The O-RAN Alliance is a community of telecoms actors looking at opening and disaggregating the RAN architecture, one of the last bastions of monolithic, proprietary naughtiness in telecoms networks. If you want to understand more about why O-RAN was created, please read here. There are two types of RIC: a non-real-time (non-RT) RIC and a near-real-time (near-RT) RIC.

O-RAN logical architecture

The RICs are frameworks with standardized interfaces to each other and to the other elements of the SMO, the cloud and the RAN. They are supposed to be platforms onto which independent developers can build RAN-specific features, packaged as apps that would theoretically be deployable on any standards-compliant RIC. The apps are called rApps for the non-RT RIC and xApps for the near-RT RIC.
Although the RICs share the same last name, they are actually quite different, more distant cousins than siblings, which makes rApps and xApps unlikely to be compatible in a multivendor environment.

The non real time RIC and the rApps

The non-RT RIC is actually not part of the RAN itself; it is part of the Service Management and Orchestration (SMO) framework. The non-real-time qualifier means that it acts on the O-RAN subsystems (near-RT RIC, Centralized Units (CU), Distributed Units (DU), Radio Units (RU)) with a frequency of at most once per second. Indeed, the non-RT RIC can interface with these systems on timescales of minutes or even days.

Its purpose is a combination of an evolution of SON (Self Organizing Networks) and the creation of a vendor-agnostic RAN OSS. SON was a great idea initially; it was intended to provide operators with a means to automate and optimize RAN configuration. Its challenges arose from the difficulty of integrating and harmonizing rules across vendors. One of O-RAN's tenets is to promote multivendor ecosystems, and the non-RT RIC provides a framework for RAN automation across them. As part of the SMO, the non-RT RIC is also an evolution of the proprietary OSS and element management systems, providing open interfaces for RAN configuration for the first time.
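To make this concrete, the non-RT RIC pushes declarative guidance to the near-RT RIC over the A1 interface as JSON policies. Below is a minimal sketch of what creating a traffic-steering policy could look like; the policy type ID, field names and endpoint path are illustrative assumptions, not the normative O-RAN schema.

    import json
    import requests  # assumes the 'requests' package is available

    # Hypothetical A1-style traffic-steering policy; the fields and the
    # policy type ID are invented for illustration, not taken from O-RAN specs.
    policy = {
        "scope": {"cellIdList": ["cell-001", "cell-002"]},
        "statement": {
            "loadThresholdPercent": 80,           # steer users above this load
            "preferredCellIdList": ["cell-003"],  # offload target
        },
    }

    # Hypothetical A1 endpoint exposed by the near-RT RIC.
    url = "https://near-rt-ric.example/a1-p/policytypes/20008/policies/steer-1"
    resp = requests.put(url, data=json.dumps(policy),
                        headers={"Content-Type": "application/json"})
    resp.raise_for_status()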

Because of its dual legacy from OSS and SON, and its less stringent integration needs, the non-RT RIC has been the first entity to see many new entrant vendors, coming from the OSS, SON, transport or cloud infrastructure management communities.

Because of their non-real-time nature, and because many non-RT RIC and rApp vendors are not the incumbent RAN vendors, rApps have somewhat limited capabilities in multivendor environments. Most vendors provide visualization / topology / dashboard capabilities and enhancements revolving around neighbour and handover management.

The near real time RIC and the xApps

The near-RT RIC is part of the RAN and comprises a set of functionalities that are, in a traditional RAN implementation, part of the feature set of the gNodeB base station. O-RAN has separated these capabilities into a framework exposing open interfaces to the RAN components, theoretically allowing vendor-agnostic RAN automation and optimization. The near-real-time qualifier implies sub-second interactions between the elements. And... here's the rub.
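As an illustration of the developer-facing side, the O-RAN Software Community publishes xApp frameworks for the near-RT RIC. The sketch below assumes the O-RAN SC Python framework (ricxappframe) and its RMR messaging; a real xApp would additionally subscribe to E2 events and send control actions back towards the CU / DU.

    # Minimal xApp skeleton, assuming the O-RAN SC Python framework
    # (ricxappframe) is installed and running on a near-RT RIC platform.
    from ricxappframe.xapp_frame import RMRXapp

    def handler(xapp, summary, sbuf):
        # Called for every RMR message routed to this xApp (e.g. E2 indications).
        xapp.logger.info("received message: {}".format(summary))
        xapp.rmr_free(sbuf)  # release the message buffer back to RMR

    # use_fake_sdl replaces the Redis-backed Shared Data Layer for local tests.
    xapp = RMRXapp(handler, rmr_port=4560, use_fake_sdl=True)
    xapp.run()  # blocks, consuming messages until terminated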

Millisecond adjustments are necessary in the RAN to account for modulation, atmospheric or interference conditions. This frequency requires a high level of integration between the CU, DU and RU to avoid losing performance. As often in telecoms, the issue is not with the technology, but rather the business model. O-RAN's objective is to commoditize and disrupt the RAN, which is an interesting proposition for its consumers (the operators) and for new entrants, less so for legacy vendors. The disaggregation of the RAN with the creation of the near-RT RIC and xApps goes one step further, commoditizing the RU, CU and DU and extracting the algorithmic value and differentiation into the xApps. The problem with disruption is that it only works in mature, entrenched market segments. While traditional RAN might be mature enough for disruption, it is uncertain whether open RAN itself has that degree of maturity, and its new entrants in RU, CU and DU would be the ones commoditized by xApps.

For this reason, it is likely that if the near-RT RIC and xApps are to be successful, only the dominant RU, CU and DU vendors will be able to develop and deploy them, which will create serious dependencies and work against vendor independence.

I am currently working on my next report on Open RAN and the RICs and will provide more updates as I progress.




Wednesday, July 8, 2020

The Lean Telco

As alluded to in my previous posts, I have tweaked the Lean Startup methodology and the Wardley Map model to create value in a telco environment.

Value is a subjective topic, but in a telco context my efforts have been aimed at creating sustainable growth strategies. Very simply, sustainable growth comes from sustainable differentiation, which stems from the creation and evolution of technological, commercial and operational characteristics that become difficult, expensive and time-consuming for your competition to emulate.

Sustainable growth comes from sustainable cost reduction and revenue growth (Duh!).

Sustainable cost reduction can be achieved through drastic cost structure changes. In a 2020 telco, it can be attained through the implementation of cloud-native architecture and principles, underpinned by strategies of network disaggregation, extensive use of open APIs and open network topologies, SDN and control / user plane separation, and systematic automation. While these goals are challenging by themselves, particularly in a brownfield legacy telco environment, they are the bare minimum changes necessary for survival. The challenges associated with the organization, skill sets and methodologies to evaluate, test, deploy, purchase and maintain these technologies are even larger.
Every telco is extremely skilled at managing technological and operational risk, through iterative, waterfall evaluation and tests, resulting in the deployment of high-availability, high-capacity networks. This methodology has also led to lengthy evaluation periods and deployments. Most vendors will recognize that sales cycles in telco are over 2 years long and that making any change in a commercial network costs several million dollars or euros. This has led to an oligopoly where only a handful of specialized vendors can economically sustain these drawn-out processes.
Lately, telcos have been trying to diversify the pool of vendors to increase competition and innovation by promoting open source and open API projects such as open RAN.
While these projects have shown interesting progress, the real cost reduction comes from the change in methodology and processes needed to take advantage of these more nimble vendors' offerings.

What I am proposing with Lean Telco is a methodological framework for identifying, evaluating, testing, sourcing and deploying telco products and services that will provide sustainable differentiation with a drastically different cost structure from the incumbent versions.
Once you have successfully changed the cost structure of evaluating, buying, deploying and managing telco infrastructure and capacity, you can survive as a high-capacity, low-overhead provider of connectivity. But if you want to thrive and grow, you need to attack the revenue part of the equation. Actually, one would argue you should start with growth objectives and look at cost structure as an optimization challenge.

Growing revenue sustainably in a telco environment comes from having more people use your existing services, having them use more of those services, connecting new people or creating new services. I have prototyped, tested and launched projects in each of these categories in my last role at Telefonica.

  1. Having more people use your existing products is difficult for telcos, because those products (residential and enterprise mobility, internet, telephony, TV...) are poorly differentiated, since they rely on the same technology from the same vendors. As a result, most telcos end up trying to deploy first (5G, SD-WAN...) or to claim a performance advantage, usually derived from a superior spectrum or infrastructure investment. The only real differentiation ends up being pricing. This is very expensive and not sustainable.
  2. Having your customers use more of your services does not necessarily lead to more revenue, as bundles and unlimited plans are periodically rolled out to counter internet hyperscalers' offerings, which rely on a different cost structure and revenue model. Again, since these services are mostly the same from one operator to another, differentiation comes from bundling and pricing. This is not sustainable.
  3. Connecting new people / clients is a worthy endeavour, but the last unconnected live mostly in rural, low-density areas, and selling services to new corporate clients usually means competing against public cloud offerings that are more cost-effective and flexible than what most telcos can offer. There are possibilities for growth there, but they require breaking out of the current telco technological framework and a willingness to assemble new value chains.
  4. Creating new services is certainly where there is the most value, if we look at the growth of telephony over the internet, video streaming services, social media and social messaging, SD-WAN, cloud security... It is also the area with the most uncertainty and risk.

Telcos are not well equipped to manage the risk and uncertainty inherent in the discovery and creation of new services. The methodologies, organization and processes they use are geared to deliver, with absolute certainty, a zero-defect product or utility to a mass market without variation. This model works well for mature, disciplined technology and vendors, not at all for exploration and innovation.
Too often, telcos build an extremely detailed plan, with contingencies. They budget it, staff it and resource it to execute within a given timeframe, only to discover that the client didn't really want / need / value what was proposed (cf. push to talk, IMS/VoLTE, RCS, private networks...).

Just like in Lean Startup, the methodology I propose allows the progressive release of resources and funds as commercial uncertainty is shed through direct client interaction, testing and feedback. In a typical telco environment, the client interaction comes at the very end of the process; here we intersperse it throughout the development process to allow pivots, or early termination if the hypotheses are not met.

Trained, mentored and helped by many, I have adapted a few methodologies to enable telcos to identify, validate and deploy new services in an agile and cost-effective fashion. I call it the Lean Telco methodology.

How do I create a Lean Telco?
I use Wardley Maps for situational awareness, creating a topographical representation of the current environment, which in my area of interest ranges from telco network virtualization (NFV) and orchestration (MANO) to cloud-native distribution and orchestration (K8s, microservices) and hybrid cloud / edge computing (telco private stacks, AWS Outposts, MS Azure edge, Google Anthos...). This is not a map until we apply a level of maturity (Genesis / handmade, Custom / solution, Product, Utility) to each component, as well as its direction and barriers, on the horizontal axis. On the vertical axis, instead of using Wardley's traditional visibility method, I use technology stack layers such as access, transport, core network, OSS / BSS, orchestration... The purpose of the map is not to be precise or even right; it is to share and compare comprehension of the environment, the players, their direction, velocity and the barriers. This visualization enables the level of shared understanding necessary for strategic discussions and gameplay around permutations and what-if scenarios.
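As a minimal sketch, the data behind such a map can be as simple as a list of components carrying a stack layer (vertical axis) and an evolution score (horizontal axis); the component names, scores and velocities below are illustrative placeholders, not an actual assessment.

    # Toy representation of a telco Wardley Map: each component has a stack
    # layer (vertical axis) and an evolution score from 0.0 (genesis) to
    # 1.0 (utility), plus a direction of travel. Values are placeholders.
    STAGES = ["genesis / handmade", "custom / solution", "product", "utility"]

    components = [
        # (name, stack layer, evolution 0..1, drift per year)
        ("RAN",            "access",        0.85, +0.02),
        ("NFV infra",      "core network",  0.55, +0.10),
        ("MANO",           "orchestration", 0.35, +0.10),
        ("Edge computing", "access",        0.20, +0.15),
    ]

    def stage(evolution: float) -> str:
        # Map a 0..1 evolution score onto the four maturity stages.
        return STAGES[min(int(evolution * len(STAGES)), len(STAGES) - 1)]

    for name, layer, evo, velocity in components:
        print(f"{name:15s} [{layer:13s}] {stage(evo):20s} drifting {velocity:+.2f}/yr")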


Once priorities and areas of risk / opportunity to investigate are identified, I use the Lean UX framework and Lean Startup methodology to systematically identify current problems needing solving, unmet customer needs, unsatisfactory experiences and potential new products / services that customers wouldn't even know of or have an opinion about. A series of workshops is usually best to crystallize the ideas. Once identified, they need to be refined into customer-centric objectives. Contrary to popular belief, customer centricity does not necessarily mean asking prospective customers what they think. Most wouldn't have any idea what to do with 5G, augmented reality or a private network if you asked them. This is where Lean UX and empathetic composite models are useful.

Each idea is reviewed and graded by a jury, which decides which ideas make it to the next stage. The selected ideas are shaped and staffed as independent projects, with dedicated resources, budget and time box. Each project lead has the overall responsibility for moving the project to the next phase and for delivering the results of the current phase to justify additional resources and budget for the next one.
At a high level, the phases are:

  • Ideation - ($5k-10k / 1-3 months) - the idea is shaped into a project, with central opportunities, areas of innovation, the company's right to play, sustainable differentiating factors, the high-level commercial opportunity and the cost / timing for the next phase.
  • Prototyping - ($20k-50k / 2-6 months) - in this phase a prototype is built, which might or might not incorporate any development or use of the target technology. The idea is to emulate the resolution of the problem and put it into customers' hands as early as possible, to identify whether the objectives and assumptions are framed properly and whether the client would value the resolution.
  • Beta - ($300k-$600k / 3-6 months) - once the central problems are identified and we know the client values their resolution, it is time to create an MVP to prove that it is technically, commercially and organizationally possible to solve the problem, and that the value created exceeds the costs.
  • Product - ($1m-$3m / 3-6 months) - in this phase, once the solution is proven possible, it is necessary to prove that it will scale and will be deployable with a mature operational and commercial model.
  • Growth - (TBD) - this is the phase where the project needs to become commercially and economically sustainable.

Each phase requires client interactions, in the form of actual tests in conditions as close as possible to a commercial network. Within each phase, we decompose the project into customer-centric objectives, each objective into hypotheses, and each hypothesis into a series of experiments that will validate or invalidate it. It helps to set clear expectations and success criteria for each of these, as in the sketch below.
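A minimal sketch of that decomposition and the associated gate review, with purely hypothetical objectives, metrics and thresholds:

    # Toy decomposition: phase -> objectives -> hypotheses -> experiments.
    # All names, metrics and thresholds are hypothetical illustrations.
    project = {
        "phase": "Prototyping",
        "objectives": [{
            "objective": "SMEs value managed low-latency connectivity",
            "hypotheses": [{
                "hypothesis": "at least 30% of trial users would pay for it",
                "experiments": [
                    {"name": "landing-page signup test", "observed": 0.37, "threshold": 0.30},
                    {"name": "pilot with 10 SMEs",       "observed": 0.20, "threshold": 0.30},
                ],
            }],
        }],
    }

    def gate_review(project: dict) -> bool:
        # Release funds for the next phase only if every experiment met its bar.
        return all(
            exp["observed"] >= exp["threshold"]
            for obj in project["objectives"]
            for hyp in obj["hypotheses"]
            for exp in hyp["experiments"]
        )

    print("advance to next phase:", gate_review(project))  # False: pilot missed its bar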
Wardley Maps help again: within each phase, they clarify which tasks and experiments are better suited to pioneers, settlers or town planners, and indeed whether the project lead can adopt that mental posture in this phase or whether someone else needs to take the lead.

The result is a portfolio of new revenue-generating projects that are systematically validated by customer feedback and by customers' capacity and propensity to pay, together with a robust operational and commercial model. Each project is periodically reviewed and graded, and all projects must pass a gate review before the next phase and release of funds, which allows a nimble, measured, progressive investment plan as risks and uncertainty decrease throughout the life of the project.

Wednesday, April 15, 2020

The business cases of edge computing

Edge computing has been a trendy topic over the last year. Between AWS' launch of Outposts, Microsoft's continuing efforts with Azure Stack, Nvidia's gaming-specialized EGX platform and even Google's Anthos toolkit, much has been said about this market segment.
Network operators, on their side, have announced deployment plans in many geographies, but with little detail in terms of specific new services, revenues or expected savings.
Having been in the middle of several of these discussions, between vendors, hyperscalers, operators and systems integrators, I am glad to share a few thoughts on the subject.

Hyperscalers have not been looking at edge computing as a new business line, but rather as an extension of their current cloud capabilities. There are many use cases today that cannot be fully satisfied by the cloud, due to a combination of high / variable latency, network congestion and lack of visibility / control of the last mile connectivity.
For instance, anyone who has tried to edit a diagram online in PowerPoint with Office 365, or to play a massive multiplayer online cloud game, will recognize how maddeningly frustrating the experience can be.
Edge computing, as in bringing cloud resources physically closer to where data is consumed / produced, makes sense to reduce latency and the need for on-premise dedicated resources. From a hyperscaler's perspective, edge computing can be as simple as dropping a few racks within an operator's data center to allow their clients to use and configure new availability zones with specific performance and price.
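With AWS, for instance, this already surfaces as ordinary cloud tooling: Local Zones (and Outposts) appear as extra zones a customer opts into and deploys against. A minimal sketch with boto3, assuming configured AWS credentials; the region and zone group names are just examples.

    import boto3  # assumes AWS credentials are configured

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List all zones, including edge (local) zones not yet opted into.
    zones = ec2.describe_availability_zones(AllAvailabilityZones=True)
    for z in zones["AvailabilityZones"]:
        if z["ZoneType"] == "local-zone":
            print(z["ZoneName"], z["OptInStatus"])

    # Opt into an edge zone group (example name) to deploy workloads there.
    ec2.modify_availability_zone_group(GroupName="us-east-1-bos-1",
                                       OptInStatus="opted-in")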

Network operators, who have largely lost the cloud computing wholesale market to the hyperscalers, see edge computing as an opportunity to reintegrate the value chain by offering cloud-like services at incomparable performance. Ideally, they would like to capture and retain the emerging high-performance cloud computing market that is sure to spawn a new category of digital services, ranging from AI-augmented manufacturing and automation, autonomous vehicles and ubiquitous facial and object recognition to compute-less smart devices. The problem is that many of these hypothetical services are ill-defined, far-fetched and futuristic, which does not inspire sufficient confidence in the CFO who has to approve multi-billion dollar capital expenditures to get going.
But surely, if the likes of Microsoft, Intel, HP, Google, Facebook, AWS are investing in Edge Computing there must be something there? What are the operators missing to make the edge computing business case positive?

Mobile or multi access edge computing?

Many operators looked at edge computing first from the perspective of mobile. The mobile edge computing business case remains extremely uncertain. There is no identified use case that justifies the cost of deploying thousands of mini compute sites at mobile locations in the short term. Even with the perspective of upgrading networks to 5G, the added cost of mobile edge computing is hard to justify.

If not at mobile sites, the best bet for operators to deploy edge computing is in Central Offices (CO). These facilities house switching platforms for copper, fiber and DSL connectivity and are overdue for upgrades in many markets. The deployment of fibre, the copper replacement and the evolution of technology from GPON to XGS-PON and NG-PON2 are excellent windows of opportunity to replace aging single-purpose infrastructure with open, software-defined computing capability.
The level of investment for retooling central offices into mini data centers is orders of magnitude lower than in the mobile case, and it is completely flexible. It is not necessary to change all central offices; one can proceed by deploying one per state / province / region and increase capillarity as business dictates.

What use cases would make edge computing's business case positive for operators in that scenario?


  • First, for operators who have triple and quadruple play, the opportunity to replace aging dedicated infrastructure for TV, fixed telephony, enterprise and residential connectivity with a cloud-native, software-defined open architecture provides interesting savings and benefits. The savings are realized from the separation of hardware and software, the sourcing and deployment of white boxes, and the opex gains of separating the control plane and centralizing and automating service elasticity.
  • Additional savings are to be had with the deployment of content / video caches at the edge. Particularly for TV providers who see the increase of on-demand and unicast live traffic, positioning edge caches allows up to 80% savings in content transport (see the back-of-the-envelope sketch after this list). This is likely to increase with the upgrade from HD to 4K and 8K and the growth in AR/VR.
  • Lastly, for operators who deploy their own CPE in their customers' homes, edge computing makes it possible to drastically simplify this equipment and reduce the cost of its deployment / maintenance, by bringing the services into the Central Office and reducing the need for storage and compute in the CPE.
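A back-of-the-envelope sketch of the transport savings mentioned above; the traffic figure and hit ratio are illustrative placeholders.

    # Illustrative backhaul savings from an edge video cache.
    peak_video_gbps = 100.0   # peak video traffic at a central office
    cache_hit_ratio = 0.80    # share of requests served from the edge cache

    backhaul_gbps = peak_video_gbps * (1 - cache_hit_ratio)
    saved_gbps = peak_video_gbps - backhaul_gbps
    print(f"backhaul needed: {backhaul_gbps:.0f} Gbps "
          f"(saves {saved_gbps:.0f} Gbps, i.e. {cache_hit_ratio:.0%} of transport)")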

While the savings can be significant in the long run, no operator can justify replacing existing infrastructure that is not yet fully amortized on these savings alone. This is why some operators are looking at these scenarios only for greenfield fiber deployments or as part of massive copper replacement windows.
Savings alone, in all likelihood, won't allow operators to deploy at the rhythm necessary to counter hyperscalers. New revenue streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited by the poor online experience. The most successful game franchises are massive multiplayer online games: they pitch dozens of players against each other and require a tightly controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, if they deploy gaming servers at the edge (see the latency sketch after this list). Even without a full-blown gaming service, providing game caching at the edge can drastically reduce the download time for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today that are proven to provide superior experience compared to the cloud. These services range from high performance cloud storage, to remote desktop, to video surveillance and recognition.
  • Beyond operators-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.
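To see why only the edge can offer controlled latency for such games, here is a rough one-way propagation sketch; distances and per-hop delays are illustrative assumptions.

    # Rough one-way latency budget: edge server vs. distant cloud region.
    def one_way_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
        light_in_fiber_km_per_ms = 200.0  # ~2/3 of c in glass
        return distance_km / light_in_fiber_km_per_ms + hops * per_hop_ms

    edge = one_way_ms(distance_km=50, hops=3)      # server in a central office
    cloud = one_way_ms(distance_km=2000, hops=12)  # server in a remote region
    print(f"edge: ~{edge:.1f} ms one-way, cloud: ~{cloud:.1f} ms one-way")
    # edge: ~1.8 ms, cloud: ~16.0 ms -- and the gap widens further with jitter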
The main issue at this stage for operators is to decide whether to let hyperscalers deploy their infrastructure in the operator network, capturing most of the value of these emerging services but also opening up a new line of revenue from wholesale hosting, or to try to play it alone, as an operator or a federation of operators, deploying a telco cloud infrastructure and building the necessary platform to resell edge compute resources in their networks.

This and a lot more use cases and business cases in my online workshop and report Edge Computing 2020.

Thursday, May 5, 2016

MEC: The $7B opportunity

Extracted from Mobile Edge Computing 2016.



Defining an addressable market for an emerging product or technology is always an interesting challenge. On one hand, you have to evaluate the problems the technology solves and their value to the market; on the other, appreciate the possible cost structure and the psychological price expectations of the potential buyers / users.

This warrants a top-down and bottom-up approach, looking at how the technology can contribute to or substitute some current radio and core network spending, together with a cost-based review of the potential physical and virtual infrastructure. [...]

The cost analysis is comparatively easy, as it relies on the well-understood current cost structure for physical hardware and virtual functions. The assumptions surrounding hardware costs have been reviewed with the main x86-based hardware vendors. The VNF pricing relies on discussions with large and emerging telecom equipment vendors on the price structure of standard VNFs such as EPC, IMS, encoding, load balancers and DPI. Traditional telco professional services, maintenance and support costs are apportioned and included in the calculations.

The overall assumption is that MEC will become part of the fabric of 5G networks and that MEC equipment will cover up to 20% of a network (coverage or population) when fully deployed.
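For illustration only, a bottom-up sizing follows this shape; every figure below is a placeholder, not a number from the report.

    # Toy bottom-up MEC sizing; all figures are placeholders.
    sites_in_network = 10_000    # macro sites in one operator's network
    mec_coverage = 0.20          # assumption above: MEC on up to 20% of sites
    capex_per_site = 25_000      # x86 hardware + integration, USD
    annual_opex_ratio = 0.15     # opex as a share of capex per year

    mec_sites = sites_in_network * mec_coverage
    capex = mec_sites * capex_per_site
    opex_per_year = capex * annual_opex_ratio
    print(f"{mec_sites:.0f} MEC sites -> capex ${capex / 1e6:.0f}M, "
          f"opex ${opex_per_year / 1e6:.1f}M / year")
    # Scale across operators and add VNF licences to approach a total market.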
The report features the total addressable market, cumulative and incremental, for MEC equipment vendors and integrators, broken down by CAPEX / OPEX and by consumer, enterprise and IoT services.
It then provides a review of operator opportunities and revenue models for each segment.


Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speeds and ultra-low latency to enable the Internet of Things, autonomous vehicles, augmented, mixed and virtual reality, and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses the latest virtualization, small cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network.


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are and how this potential multi-billion dollar technology can change how OTTs, operators, enterprises and machines can enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator perspective.

Table of contents, executive summary can be downloaded here.

Tuesday, January 26, 2016

2015 review and 2016 predictions

As is now customary, I try to grade my predictions for 2015 and see what panned out and what didn't. I'll also share what I see for 2016.

Content providers, creators, aggregators:

"They will need to simultaneously maximize monetization options by segmenting their user base into new price plans and find a way to unlock value in the mobile market.While many OTT, particularly social networks and radio/ audio streaming have collaborated and signed deals with mobile network operators, we are seeing also a tendency to increasingly encrypt and obfuscate online services to avoid network operators meddling in content delivery." 
On that front, I think that both predictions held true. I was envisioning encryption to jump from 10 to 30% of overall data traffic and I got that wrong: at least in many mature markets where Netflix is big in mobile, we see upwards of 50% of traffic being encrypted. I still claim some predictive credit here, with one of my first posts flagging the encryption trend 2 years before it started in earnest.

The prediction about segmentation through pricing as OTT services mature has also been largely fulfilled, with YouTube's 4th attempt, by my count, to launch a paid service. Additionally, the trend of content aggregators investing in original content rights acquisition is accelerating, with Amazon gearing up for movie theaters and Netflix outspending traditional providers such as the BBC, the combined investment of both companies estimated in the $9Bn range. Soon, we are talking real money.


In 2016, we will see an acceleration of traditional digital services that were originally launched for fixed line internet transitioning to predominantly mobile or mobile-only plays. Right now, 47% of Facebook users access the service exclusively through mobile and account for 78% of the company's revenue. More than 50% of YouTube views are on mobile devices and the corresponding revenue growth is over 100% year on year. 49% of Netflix's 18-to-34-year-old demographic watches the service on mobile devices. We have seen signs, with Twitter's Vine and Periscope as well as Spotify, MTV and Facebook, that the battlefield will be video services.


Network operators: Wholesaler or value providers?

The operators in 2016 are, as a community, still as confused as in 2015. They perceive threats from each other, which causes many acquisitions; from OTTs, which causes in equal measure many partnerships and ill-advised service launches; and from regulatory bodies, which causes lawyers to fatten up at the net neutrality / privacy buffet.
"we will see both more cooperation and more competition, with integrated offering (OTT could go full MVNO soon) and encrypted, obfuscated traffic on the rise". 
We spoke about encryption; the OTT going full MVNO was somewhat fulfilled by Google's disappointing Project Fi launch. On the cooperation front, we have seen a flurry of announcements, mostly centered around sponsored data or zero-rated subscription services from Verizon and AT&T.
"We will probably also see the first lawsuits from OTT to carriers with respect to traffic mediation, optimization and management. " 
I got that half right. No lawsuits from content providers, but heavy fines from regulators on operators who throttle, cap or prioritize content (Sprint, AT&T, ...).

As for digital service providers, network operators are gearing up to compete on video with services such as mobile TV / LTE broadcast (AT&T, EE, Telekom Slovenije, Vodafone), event streaming (China Telecom) and sponsored data / zero-rated subscription services (Verizon, T-Mobile Binge On, Sprint, AT&T, Telefonica...).

"Some operators will seek to actively manage and mediate the traffic transiting through their networks and will implement HTTPS / SPDY proxy to decrypt and optimize encrypted traffic, wherever legislation is more supple."
I got that dead wrong. Despite interest and trials, operators are not ready to go into open battle with OTTs just yet. Decrypting encrypted traffic is certainly illegal in many countries, or at the very least hostile, and seems to be expected only of government agencies...



Mobile Networks Technology

"CAPEX will be on the rise overall with heterogeneous networks and LTE roll-out taking the lion share of investments. LTE networks will show signs of weakness in term of peak traffic handling mainly due to video and audio streaming and some networks will accelerate LTE-A investments or aggressively curb traffic through data caps, throttles and onerous pricing strategies."
Check and check.
"SDN will continue its progress as a back-office and lab technology in mobile networks but its incapacity to provide reliable, secure, scalable and manageable network capability will prevent it to make a strong commercial debut in wireless networks. 2018 is the likeliest time frame."
I maintain the view that SDN is still too immature for mass deployment in mobile networks. Although we have seen encouraging trials moving from lab to commercial use, we are still a long way, from both a business case and a technology maturity standpoint, from seeing a mobile network core or RAN running exclusively or mostly on SDN.
"NFV will show strong progress and first commercial deployments in wireless networks, but in vertical, proprietary fashion, with legacy functions (DPI, EPC, IMS...) translated in a virtualized environment in a mono vendor approach. "
We have seen many examples of that this year with various levels of industry and standard support from Connectem, Affirmed Networks, Ericsson, Cisco and Huawei.

"Orchestration and integration with SDN will be the key investments in the standardization community. The timeframe for mass market interoperable multi vendor commercial deployment is likely 2020."
Orchestration / MANO has certainly driven many initiatives (Telefonica's OpenMANO) and acquisitions (Ciena acquired Cyan, for example) and remains the key challenge in 2016 and beyond. SDN / NFV will not take off unless there is a programmatic framework to link customer-facing services to internal services, to functions, to virtual resources and to hardware resources in a multi-vendor fashion. I still maintain 2020 as the probable target for this.

In 2016, the new bit of technology I will investigate is Mobile Edge Computing: the capacity to deploy COTS compute in the radio network, allowing virtualized services to be positioned at the network's edge and enabling IoT, automotive, augmented reality or virtual reality services that require minimal latency to access content even faster.


In conclusion, 2016 shows more than ever signs that the house of cards is about to collapse. Data traffic is increasing fast, video now dominates every network, and it is just starting. With 4K and then 8K around the corner, not to mention virtual and augmented reality, many players in the value chain understand that video is going to be the next few years' battlefield in mobile, OTT and cloud services. This is why we are seeing so much consolidation and so many pivot strategies in the field.

What is new is that mobile, once a side concern or barely on the radar for many so-called OTTs, has now emerged as the predominant if not exclusive market segment in revenue.
This means that more pressure will rain down on network operators to offer bandwidth and speed. My reports and workshops show that mobile advertising is not growing fast enough compared to the subscriber eyeballs moving to mobile screens. This is mostly because video over mobile networks is a pretty low quality service, which will get worse as more subscribers transition to LTE. The key to unlocking the value chain will be collaboration between operators and OTTs, and that will only happen if/when a profitable business model and apportioning of costs is worked out.

Lastly, my prediction about selfie kills seems, unfortunately, to have been fulfilled, with selfies now killing more people than shark attacks. Inevitably, we have to conclude that in 2016, commercial drones and hoverboards will kill more people than selfies...


That's all folks, see you at MWC next month.

Wednesday, June 24, 2015

Building a mobile video delivery network? part III


Content providers and aggregators obviously have an interest (and in some cases a legal obligation) to control the quality of the content they sell to a consumer. Without owning networks outright to deliver the content, they rent capacity under specific service level agreements to deliver this content with managed Quality of Experience (QoE). When the content is delivered over the "free" internet or a mobile network, there is no QoE guarantee. As a result, content providers and aggregators tend to "push the envelope" and grab as much network resource as available to deliver a video stream, in an effort to equate speed and capacity with consumer QoE. This might work on fixed networks, but in mobile, where capacity is limited and variable, it causes congestion.

Obviously, delegating the selection of content quality to the device would seem a smart move. Since the content is played on the device, this is where there is the clearest understanding of instantaneous network capacity or congestion. Unfortunately, certain handset vendors, particularly those coming from the consumer electronics world, do not have enough experience in wireless IP for efficient video delivery. Some devices, for instance, will grab the highest capacity available on the network, irrespective of the encoding of the video requested. So, if the capacity at connection is 2 Mbps and the video is encoded at 1 Mbps, it will be downloaded at twice its rate. That is not a problem when the network capacity is available, but as congestion creeps in, this behaviour snowballs and compounds congestion in embattled networks.
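A better-behaved client would pace its requests to the encoded bitrate instead of pulling at the full link rate. A minimal sketch of such rate pacing; the bitrate, chunk size and fetch stub are illustrative.

    import time

    # Illustrative client-side pacing: fetch a 1 Mbps-encoded stream in
    # 2-second segments instead of downloading at the full link rate.
    ENCODED_BPS = 1_000_000
    CHUNK_SECONDS = 2

    def fetch_chunk(index: int) -> bytes:
        # Placeholder for an HTTP range request of one media segment.
        return b"\x00" * (ENCODED_BPS * CHUNK_SECONDS // 8)

    for i in range(5):
        start = time.monotonic()
        chunk = fetch_chunk(i)
        elapsed = time.monotonic() - start
        # On average, never request faster than playback consumes:
        time.sleep(max(0.0, CHUNK_SECONDS - elapsed))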
As more and more device manufacturers coming from the computing world (as opposed to mobile) enter the market with smartphones and tablets, we see wide variations in the implementation of their native video players.
Consequently, operators are looking at ways to control video traffic, as a means to perhaps monetize it differently in the future. Control can take many different aspects and rely on many technologies, ranging from relatively passive to increasingly obtrusive and aggressive.

In any case, the rationale for implementing video control technologies in mobile networks goes beyond the search for the best delivery model. At this point in time, the actors have equal footing and equal interest in preserving users' QoE. They have elected to try and take control of the value chain independently. This has resulted in a variety of low-level battles, where each side is trying to assert control over the others.
The signs of these battles are multiple:
  • Google tries to impose VP9 as an alternative to H.265 / HEVC: while the internet giant's rationale of providing a royalty-free codec as the next high-efficiency codec seems innocuous to some, it is a means to control the value chain. If content providers start to use VP9 instead of H.265, Google will have the means to durably influence the roadmap for delivering video content over the internet.
  • Orange extracts peering fees from Google / YouTube in Africa: Orange has a dominant position in mobile networks and backhaul in Africa and has been able to force Google to the negotiating table and get them to pay peering fees for delivering YouTube over its wireless networks. A world first.
  • Network operators implement video optimization technologies: in order to keep control of the OTT videos delivered on their networks, network operators have deployed video optimization engines to reduce the volume of traffic, to alleviate congestion or, more generally, to keep a firmer grip on the type of traffic transiting their networks.
  • Encryption as an obfuscation mechanism: content or protocol encryption has traditionally been a means to protect sensitive content from interception, reproduction or manipulation. There is a certain cost and latency involved in encoding and decoding the content, so it has remained mostly reserved for premium video. Lately, content providers have been experimenting with the delivery of encrypted video as a means to obfuscate the traffic and stop network operators from interfering with it.
  • The net neutrality debate, when pushed by large content providers and aggregators, is oftentimes a proxy for a commercial battle. The economics of the internet have evolved from browsing to streaming, and video has disrupted the models significantly. The service level agreements put in place by the distribution chains (CDNs, peering points...) are somewhat inadequate for video delivery.


We could go on listing all the ways that content providers and network operators are probing each other's capacity to remain in control of the user's video experience. These initiatives are isolated, but they are signs of large market forces trying to establish dominance over each other. So far, these manoeuvres have degraded the user experience. The market will undoubtedly settle into a more collaborative mode, as the current behaviour could lead to mutually assured destruction. The reality is simple: there is a huge appetite for online video, and an increasing part of it takes place on mobile devices, on cellular networks. There is money to be made if there is collaboration; the players are too large to establish durable dominance without vertical integration.

Tuesday, June 23, 2015

Building a mobile video delivery network? part II


Frequently, in my interactions with vendors and content providers alike, the same questions come up. Why aren't content providers better placed than network operators to manage the delivery of the content they own? Why are operators implementing transcoding technologies in their networks, when content providers and CDNs have similar capabilities and a better understanding of the content they deliver? Why should operators be involved in controlling the quality of a content or service that is not theirs?

In every case, the answer is the same: it is about control. If you look at the value chain of delivering content over wireless networks, it is clear that technology abounds when it comes to controlling the content, its quality, its delivery and its associated services at the device, in the network, in the CDN and at the content provider. Why are all the actors in the delivery chain seemingly hell-bent on overstepping each other's boundaries and wrestling away each other's capacity to influence content delivery?

To answer this question, you need to understand how content used to be sold in mobile networks. Until fairly recently, the only "successful" use case of content being sold on mobile networks was ringtones. In order to personalize your phone, one used to go to one's operator's portal and buy a ringtone to download to one's device. The ringtones were sold by the operator, charged on one's wireless bill and provided by an aggregator, usually white-labelled, who would receive a percentage of the sale and then kick back a percentage of their share to the content provider who created the ringtone.
That model was cherished by network operators. They had full control of the experience, selecting themselves the content aggregators and in some cases the content providers, negotiating the rates from a position of power, and selling to the customer under their brand, in their branded environment, on their bills.

This is a long way from today's OTT (Over-The-Top) services, where content and services are often free for the user, monetized through advertising or other transparent schemes, with content selected by the user, purchased or sourced directly on the content provider's site, with no other involvement from the network operator than the delivery itself. These OTT services threaten the network operator's business model. Voice and messaging, the traditional revenue makers for operators, are decreasing year over year in revenue, while increasing in volume, due to the fierce competition from OTT providers. These services remain hugely profitable for networks, and technology has allowed great scalability with small cost increments, promising healthy margins for a long while. Roaming prices are still, in many cases, extortionate; while some legislators are trying to get users fairer prices, it will be a long time before they disappear altogether.

Data, in comparison, is still uncharted territory. Until recently, the service was not really monetized, used instead as a loss leader to entice consumers to sign longer-term contracts. This is why so many operators initially launched unlimited data services. 3G and, more recently, LTE have seen the latest examples of operators subsidizing data services for customer acquisition.

The growth of video in mobile networks is upsetting this balance, though. The unpredictability and natural propensity of video to expand and monopolize network resources make it a more visible and urgent threat as an OTT service. Data networks have greatly evolved with LTE, with better capacity, speed and latency than 3G. But the price paid to increase network capacity is still in the order of billions of dollars, once one takes into account spectrum, licenses, real estate and deployment. Unfortunately, the growth of video in terms of users, usage and quality outstrips the progress made in transport technology. As a result, when network operators look at video's compound annual growth rate exceeding 70%, they realize that serving the demand will continue to be a costly proposition if they are not able to control or monetize it. This is the crux of the issue. Video, as part of data, is not charged today in a very sophisticated manner: it is sold as unlimited, or as a bucket of usage and/or speed. The price of data delivery today will not cover the cost of upgrading network capacity in the future if network operators cannot better control video traffic.

Additionally, content providers and device vendors each pull in their own direction in this equation. Device manufacturers, mobile network operators and content providers all want to deliver the best user experience for the consumer, but the lack of cooperation between the protagonists in the value chain paradoxically results in an overall reduced user experience.


Wednesday, June 10, 2015

Google's MVNO - Project Fi is disappointing

A first look at Google's MVNO, launching in the US on the Sprint and T-Mobile networks, proves a little disappointing (or a relief if you are a network operator). I had chronicled the announcement of the launch from Mobile World Congress and expected much more disruption in services and pricing than what is announced here.

The MVNO, dubbed Project Fi, is supposed to launch shortly, and you have to request an invitation to get access to it (so mysterious and exciting...).

At first glance, there is little innovation in the service. The Google virtual network will span two LTE networks from different providers (but so does Virgin's in France, for instance) and will also connect "seamlessly" to the "best" wifi hotspot. It will be interesting to read the first feedback on how effectively the device selects the best signal from these three options and how dynamically that selection occurs. Handover mid-call or mid-data-session is going to be an interesting use case; Google assures you that the transition will be "seamless".

On the plus side, Google has really taken a page from Iliad, whose disruptive Free service launched in France and which was at one time rumored to be acquiring T-Mobile US. See here the impact their pricing strategy has had on the French telecommunications market.
  1. Fi Basic service comes with unlimited US talk and text, unlimited international text and wifi tethering for $20 per month.
  2. The subscriber sets a monthly data budget, whereby he selects a monthly amount and prepays $10 per GB. At the end of the month, the unused data is credited back at 1c / MB towards the following month (see the sketch below). The user can change his budget on a monthly basis. Only cellular data counts towards usage, not wifi. That's simple, easy to understand and, after a little experimentation, will feel very natural.
  3. No contract, no commitment (except that you have to buy a $600+ Nexus phone).
  4. You can send and receive all cellular texts and calls using Google Hangouts on any device.
  5. Data roaming is the same price as domestic but... see drawbacks.
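The data budget mechanics in point 2 work out as in this sketch; the $10 / GB prepay and 1c / MB credit come from the announcement, while the budget and usage figures are examples.

    # Project Fi data billing sketch: prepay $10 / GB, unused data credited
    # back at 1 cent per MB (the same $10 per GB, counting 1 GB = 1,000 MB).
    BASIC_FEE = 20.00        # unlimited talk / text
    PRICE_PER_GB = 10.00
    CREDIT_PER_MB = 0.01

    budget_gb = 3.0          # example monthly data budget
    used_gb = 2.1            # example actual cellular usage

    prepaid = budget_gb * PRICE_PER_GB
    credit = (budget_gb - used_gb) * 1000 * CREDIT_PER_MB
    print(f"prepaid ${prepaid:.2f}, credit back ${credit:.2f}, "
          f"net bill ${BASIC_FEE + prepaid - credit:.2f}")
    # prepaid $30.00, credit back $9.00, net bill $41.00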

Here are, in my mind, the biggest drawbacks of the service as described.
  1. The first big disappointment is that the service will initially run only on Google's Nexus 6. I have spoken at length about the dangers and opportunities of a fully vertical delivery chain in wireless networks, and Google at first seems to pile up the drawbacks (lack of device choice) with few of the supposed benefits (where is the streamlined user experience?).
  2. "Project Fi connects you automatically to free, open wifi networks that do not require any action to get connected". I don't know about you, but I don't think I have ever come across one of these mysterious hotspots in the US. Even Starbucks' or McDonald's free hotspots require you to accept terms and conditions, and the speed is usually lower than LTE.
  3. Roaming data speed limited to 256 kbps! Really? Come on, we are in 2015. Even if you are not on LTE, you can get multi-Mbps on 3G / HSPA. Capping at that speed means that you will not be streaming video, tethering or using data-hungry apps (Facebook, Netflix, Periscope, Vine, Instagram...). What's the point? At this stage, better to say roaming only on wifi (!?).
In conclusion, it is an interesting "project" that is sure to make some noise and have an impact on the already active price war between operators in the US, but on the face of it, there is too little innovation and too much hassle for it to become a mass market proposition. Operators still have time to figure out new monetization strategies for their services but, more than ever, they must choose between becoming wholesalers or value-added providers.

Wednesday, May 13, 2015

Mobile video monetization: the need for a mediation layer

Extracted from my latest report, mobile video monetization 2015

[...] What is clear from my perspective is that the stabilization of the value chain for monetizing video content in mobile networks is unlikely to happen quickly without an interconnect / mediation layer. OTTs and content providers are increasingly collaborating when it comes to enabling connections and zero-rating data traffic; but monetization plays involving advertising, sponsoring, price comparison, recommendation and geo-localized segmented offerings are really in their infancy.

Publishers are increasing their inventory and advertisers are targeting mobile screens, but network operators still have no idea how to enable this model in a scalable manner, presumably because many OTTs whose model is ad-dependent are not yet willing to share that revenue without well-defined value in return.

Intuitively, there are many elements that today reside in an operator's network that would enrich and raise the value of ad models in a mobile environment. Whether performance- or impression-driven, advertising relies on contextualization for engagement. A large part of that context could / should be whether the user is on wifi or on the cellular network, whether he is at home, at work or in transit, whether he is a prepaid or postpaid subscriber, how much data or messaging is left in his monthly allotment, whether the cell he is in is congested, whether he is experiencing impairments because he is far from the antenna or because he is being throttled close to the end of his quota, or whether he is roaming or in his home network... The list of data points that can enrich or prevent a successful engagement in a mobile environment goes on and on.

On the network front, understanding whether a piece of content is an ad or not, whether it is sponsored or not, whether it is performance- or impression-measured, and whether it can be modified, replaced or removed from a delivery would be tremendously important to categorize and manage traffic accurately.

Of course, part of the problem is that no advertiser, content provider, aggregator or publisher wants to cut deals individually with the 600+ mobile network operators and the 800+ MVNOs if they do not have to.

Since there is no standard API to exchange this data in a meaningful yet anonymized fashion, the burden falls on the parties to create, on a case-by-case basis, the foundation for this interaction from both a technical and a commercial standpoint. This is not scalable and won't work fast enough for the market to develop meaningfully.
This is not the first time a problem of this kind has occurred in mobile networks: whether for data or messaging interconnection, roaming, or inter-network settlements, IPX and interconnect companies have emerged to ease the pain of mediating traffic and settlements between networks.

There is no reason a similar model shouldn't work for connecting mobile networks, advertisers and OTT providers in a meaningful clearing-house type of partnership. There is no technical limitation here; it just needs a transactional engine separating control plane from data plane, integrated with ad networks and IPX, and a meaningful API to carry subscriber and session information on the control plane both ways (from the network to the content provider and vice versa), as sketched below. Companies that could make this happen include traditional IPX providers such as Syniverse, but companies with more advertising DNA, such as Opera, Amazon or Google, are probably better bets. [...]
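As a thought experiment only, the sketch below shows what that two-way control-plane exchange could look like through such a clearing house. The payloads, field names and broker behavior are all hypothetical; no real IPX or ad-network API is implied.

    import json

    # Hypothetical clearing-house exchange; all field names are illustrative only.

    def network_to_provider(session_token: str) -> str:
        """Operator side: publish anonymized session context to the clearing house."""
        context = {
            "session": session_token,   # opaque token, never a subscriber identity
            "access": "lte",
            "congested": False,
            "roaming": False,
        }
        return json.dumps(context)

    def provider_to_network(url: str) -> str:
        """Content side: declare what is being delivered so the network can classify it."""
        descriptor = {
            "url": url,
            "is_ad": True,
            "sponsored": True,           # e.g. zero-rated by the advertiser
            "measurement": "impression", # impression- vs. performance-measured
            "mutable": False,            # may the network transcode or replace it?
        }
        return json.dumps(descriptor)

    # The clearing house correlates both messages so each side gets the context
    # it lacks, without bilateral integrations between 600+ MNOs and every publisher.
    print(network_to_provider("tok-123"))
    print(provider_to_network("https://cdn.example.com/ad.mp4"))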

Tuesday, March 10, 2015

Mobile video 2015 executive summary

As is now traditional, I return from Mobile World Congress with a head full of ideas and views on market evolution, fueled by dozens of meetings and impromptu discussions. The 2015 mobile video monetization report, now in its fourth year, reflects the trends and my analysis of the mobile video market, its growth, opportunities and challenges.

Here is the executive summary from the report to be released this month.

2014 has been a year of contrasts for deployments of video monetization platforms in mobile networks. The market has grown in deployments and value, but an unease has gripped some of its protagonists, forcing exits and pivot strategies, while players with new value propositions have emerged. Several factors explain this transition year.

On the growth front, we have seen the emergence of MVNOs and interconnect / clearing houses as buying targets, together with the natural turnover and replacement of now aging and fully amortized platforms deployed five or six years ago.

Additionally, the market leaders' upgrade strategies have naturally also created some space for challengers and new entrants. Mature markets have seen mostly replacements and greenfield MVNO deployments, while emerging markets have added new units in markets that are either too early for 3G or already saturated in 4G. Volume growth has been particularly sustained in Eastern / Central Europe, North Africa, the Middle East and South East Asia.

On the other hand, the emergence and growth of traffic encryption, coupled with persistent legal and regulatory threats surrounding the net neutrality debate, has cooled down, delayed and in some cases shut down optimization projects as operators rethink their options. Western Europe and North America have seen a marked slowdown, while South America is only just starting to show interest.

The value of the deals has been in line with last year's, after sharp erosion due to the competitive environment. The leading vendors have consolidated their approach, taken on new strategies and, overall, capitalized on their installed base, while many new deals have gone to new entrants and market challengers.

2014 has also been the first year of a commercial public cloud deployment, which should soon be followed by others. Network function virtualization has captivated many network operators' imaginations and science-experiment budgets, prompting the emergence of the notion of traffic classification and management as a service.

Video streaming, specifically, has shown great growth in 2014, consolidating its place as the fastest growing service in mobile networks and digital content altogether. 2014 and early 2015 have seen many acquisitions of video streaming, packaging and encoding technology companies. What is new, however, is that a good portion of these acquisitions were performed not by other technology companies but by OTT players such as Facebook and Twitter.

Mobile video advertising is starting to become a "thing" again, as investments, inventory and views show triple-digit growth. The trend suggests mobile video advertising could become the single largest revenue opportunity for mobile operators within a five-year timeframe, but its implementation demands a change in attitude, organization and approach that is alien to most operators' DNA. The transformation, akin to a heart transplant, will probably leave many dead on the operating table before the graft takes and the technique is refined, but they might not have much choice, looking at Google's and Facebook's announcements at Mobile World Congress 2015.

Will new technologies such as LTE Multicast, due to make their start in earnest this year and promising quality-assured HD content via streaming or download, be able to unlock the value chain?


The mobile industry is embattled and finds itself facing great threats to its business model. As the saying goes, those who survive are not necessarily the strongest, but rather those who adapt the fastest.

Wednesday, January 14, 2015

2014 review and 2015 predictions

Last year, around this time, I made some predictions for 2014. Let's have a look at how I fared, and I'll risk some opinions for 2015.
Before the predictions, though: new year, new website. Check it out at coreanalysis.ca

Content providers, creators, aggregators:

"OTT video content providers are reaching a stage of maturity where content creation / acquisition was the key in the first phase, followed by subscriber acquisition. As they reach critical mass, the game will change and they will need to simultaneously maximize monetization options by segmenting their user base into new price plans and find a way to unlock value in the mobile market." 
On that front, content creation / acquisition still remains a key focus of large video OTT players (see Netflix's launch of Marco Polo for $90m). Netflix has reported $8.9B of content obligations as of September 2014. On the monetization front, we have also seen signs of maturity, with YouTube experimenting with new premium channels and Netflix charging a premium for 4K streaming. HBO has started to break out of its payTV shell and has signed deals to be delivered as online, broadband-only subscriptions, without cable/satellite.
Netflix has signed a variety of deals with European MSOs and broadband operators as it launched there in 2014.
While many OTT players, particularly social networks and radio / audio streaming services, have collaborated and signed deals with mobile network operators, we are also seeing a tendency to increasingly encrypt and obfuscate online services to keep network operators from meddling in content delivery.
Both trends will likely accelerate in 2015, with more deals struck between OTT players and network operators for subscription-based zero-rated data services. We will also see the proportion of encrypted data traffic in mobile networks rise from the low 10s to at least 30% of overall traffic.

Wholesaler or Value provider?


The discussion about the place of the network operator and MSO in content and service delivery is still very much alive. Late last year, we saw the latest net neutrality saber-rattling from network operators and OTT players alike, with even politicians entering the fray and trying to influence the regulatory debates. This will likely not be settled in 2015. As a result, we will see both more cooperation and more competition, with integrated offerings (OTT players could go full MVNO soon) and encrypted, obfuscated traffic on the rise. We will probably also see the first lawsuits from OTT players against carriers with respect to traffic mediation, optimization and management. This adversarial climate will further delay monetization plays relying on mobile advertising. Only integrated offerings between OTT players and carriers will be able to tap this revenue source.
Some operators will step away from the value-provider strategy and embrace wholesale models, trying to sign as many MVNOs and OTT players as possible while focusing on network excellence. These strategies will fail as the price per byte declines inexorably, unable to sustain a business model where more capacity requires more investment for diminishing returns.
Other operators will seek to actively manage and mediate the traffic transiting through their networks and will implement HTTPS / SPDY proxies to decrypt and optimize encrypted traffic, wherever legislation is more permissive.

Mobile Networks

CAPEX will be on the rise overall, with heterogeneous networks and LTE roll-outs taking the lion's share of investments.
LTE networks will show signs of weakness in terms of peak traffic handling, mainly due to video and audio streaming, and some networks will accelerate LTE-A investments or aggressively curb traffic through data caps, throttling and onerous pricing strategies.

SDN will continue its progress as a back-office and lab technology in mobile networks, but its inability so far to provide reliable, secure, scalable and manageable network capability will keep it from making a strong commercial debut in wireless networks. 2018 is the likeliest timeframe.

NFV will show strong progress and first commercial deployments in wireless networks, but in a vertical, proprietary fashion, with legacy functions (DPI, EPC, IMS...) translated into a virtualized environment in a mono-vendor approach. We will also see micro deployments in emerging markets, where cost of ownership takes precedence over performance or reliability. APAC will also see some commercial deployments in large networks (Japan, Korea) in fairly proprietary implementations.
Orchestration and integration with SDN will be the key investments in the standardization community. The timeframe for mass-market, interoperable, multi-vendor commercial deployments is likely 2020.

To conclude this post, my last prediction is that someone will likely be bludgeoned to death with their own selfie stick. I'll put my money on Mobile World Congress 2015 as the likely venue, where I am sure countless companies will give them away, to the collective exasperation and eye-rolling of the Barcelona population.

That's all folks, see you soon at one of the 2015 shows.