
Monday, December 4, 2023

Is this the Open RAN tipping point: AT&T, Ericsson, Fujitsu, Nokia, Mavenir


The latest publications around Open RAN deliver a mixed bag of progress and skepticism. How should we interpret this conflicting information?

A short retrospective of the most recent news:

On the surface, Open RAN seems to benefit from strong momentum and to be delivering on its promise of disrupting traditional RAN: new suppliers are entering the market, and traditional architectures are opening up to a more disaggregated, multi-vendor model. The latest announcement from AT&T and Ericsson even suggests that the promised TCO reduction for brownfield deployments is achievable:
AT&T's yearly CAPEX guidance is expected to come down from a high of ~$24B to about $20B per year starting in 2024. If the $14B to be spent over 5 years on Ericsson RAN yields the announced 70% of traffic on Open RAN infrastructure, AT&T may have dramatically improved its RAN CAPEX with this deal.
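As a rough sanity check on these figures, here is a back-of-the-envelope sketch. The numbers are the ones quoted above; reading the CAPEX guidance drop as attributable savings is my interpretation, not AT&T guidance.

```python
# Back-of-the-envelope check on the figures cited above.
# The "savings" framing is an assumption for illustration.

capex_before = 24.0    # ~$B per year, recent high
capex_after = 20.0     # ~$B per year, guided from 2024
contract_total = 14.0  # $B, Ericsson Open RAN deal
contract_years = 5

annual_ran_spend = contract_total / contract_years  # implied RAN run rate
annual_capex_delta = capex_before - capex_after     # implied yearly reduction

print(f"Implied annual RAN spend: ${annual_ran_spend:.1f}B")
print(f"Implied annual CAPEX reduction: ${annual_capex_delta:.1f}B")
```

In other words, the Ericsson deal represents roughly $2.8B per year of RAN spend inside a guidance that drops by about $4B per year, which is why the announcement reads as a TCO story.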

What is driving these announcements?

For network operators, Open RAN has been about strategic supply chain diversification. The coalescence of the market into an oligopoly, and into a duopoly after the exclusion of Chinese vendors from a large number of Western networks, has created an unfavorable negotiating position for the carriers. The business case of 5G relies heavily on declining costs, or rather on a change in the cost structure of deploying and operating networks. Open RAN is one element of that change, together with edge computing and telco clouds.

For operators

The decision to move to Open RAN is, for the most part, no longer up for debate. While the large majority of brownfield networks will not completely transition to Open RAN, they will introduce the technology alongside the traditional architecture to foster cloud-native network implementations. It is not a matter of if but a matter of when.
When varies for each market and operator. Operators do not roll out a new technology just because it makes sense, even if the business case is favorable. A window of opportunity has to present itself to facilitate the introduction of the new technology. In the case of Open RAN, the windows can be:
  • Generational changes: 4G to 5G, NSA to SA, 5G to 6G
  • Network obsolescence: the RAN contracts are up for renewal, the infrastructure is aging or needs a refresh. 
  • New services: private networks, network slicing...
  • Internal strategy: transition to cloud native, personnel training, operating models refresh
  • Vendor weakness: nothing beats an end-of-quarter / end-of-year big infrastructure bundle discount to secure the deal and alleviate the risks of introducing new technologies

For traditional vendors

For traditional vendors, the innovator's dilemma has been at play. Nokia endorsed Open RAN early on, with little to show for it until recently, when it convincingly demonstrated multi-vendor integration and live trials. Ericsson, as the market leader, has been slower to endorse Open RAN and has so far adopted it selectively, for understandable reasons.

For emerging vendors

Emerging vendors have had mixed fortunes with Open RAN. The early market leader, Altiostar, was absorbed by Rakuten, which gave the market pause for roughly three years while other vendors caught up. Mavenir, Samsung, Fujitsu and others now offer credible products and services, with possible multi-vendor permutations.
Disruptors, emerging and traditional vendors are all battling in the RAN intelligence and orchestration market segment, which promises to deliver additional Open RAN benefits (see link).


Open RAN still has many challenges to overcome before it becomes a solution that can be adopted in any network, but the latest momentum seems to show progress toward implementing the technology at scale.
More details can be found through my workshops and advisory services.



Friday, September 18, 2020

Rakuten: the Cloud Native Telco Network

Traditionally, telco network operators have only collaborated in very specific environments; namely standardization and regulatory bodies such as 3GPP, ITU, GSMA...

There are a few examples of partnerships, such as Bridge Alliance or BuyIn, mostly for procurement purposes. When it comes to technology, integration, product and services development, however, examples of one carrier buying another's technology and deploying it in its own network have been rare.

It is not so surprising if we look at how, in many cases, operators have used their venture capital arms to invest in startups that rarely end up being used in their own networks. One has to think that using another operator's technology poses even more challenges.

Open source and network disaggregation, driven by associations like Facebook's Telecom Infra Project, the Open Networking Foundation (ONF), the Linux Foundation and the O-RAN Alliance, have somewhat changed the nature of the discussions between operators.

It is well understood that the current oligopolistic situation among telco network suppliers is not sustainable in terms of long-term innovation and cost structure. The wound is somewhat self-inflicted: operators have forced vendors to merge and acquire one another in order to sustain the scale and financial burden of surviving 2+ year procurement processes with drastic SLAs and penalties.

Recently, these trends have started to coalesce, with a renewed interest for operators to start opening up the delivery chain for technology vendors (see open RAN) and willingness to collaborate and jointly explore technology development and productization paths (see some of my efforts at Telefonica with Deutsche Telekom and AT&T on network disaggregation).

At the same time, hyperscalers, unencumbered by regulatory and standardization purview, have been able to achieve global scale and dominance in cloud technology and infrastructure. The recent announcements by AWS, Microsoft and Google show that there is both interest and pressure to help network operators achieve cloud-nativeness by adopting the hyperscalers' models, infrastructure and fabric.

Some operators might feel this is a welcome development (see Telefonica O2 Germany announcing the deployment of Ericsson's packet core on AWS) for specific use cases and competitive environments. 

Many, at the same time, are starting to feel the pressure to realize their cloud-native ambition without the hyperscalers' help or intervention. I have written many times about how telco cloud networks and their components (OpenStack, MANO, ...) have, in my mind, failed to reach that objective.

One possible guiding light in this industry over the last couple of years has been Rakuten's effort to create, from the ground up, a cloud native telco infrastructure that is able to scale and behave as a cloud, while providing the proverbial telco grade capacity and availability of a traditional network. Many doubted that it could be done - after all, the premise behind building telco clouds in the first place was that public cloud could never be telco grade.

It is now time to accept that it is possible and beneficial to develop telco functions in a cloud native environment.

Rakuten's network demonstrates that it is possible to blend traditional and innovative vendors from the telco and cloud environments to produce a cloud native telco network. The skeptics will say that Rakuten has the luxury of a greenfield network, and that much of its choices would be much harder in a brownfield environment.




The reality is that whether in the radio, the access or the core, in OSS or BSS, there are vendors now offering cloud-native solutions that can be deployed at scale with telco-grade performance. The reality, as well, is that not all functions and not all elements are cloud-native ready.

Rakuten has taken the pragmatic approach to select from what is available and mature today, identifying gaps with their ideal end state and taking decisive actions to bridge the gaps in future phases.




Between the investment in Altiostar, the acquisition of Innoeye and the joint development of a cloud-native 5G Stand Alone Core with NEC, Rakuten has demonstrated clarity of vision, execution and commitment, not only to be the first cloud-native telco, but also to become the premier cloud-native telco supplier with its Rakuten Mobile Platform. The latest announcement of an MoU with Telefonica could be a strong market signal that carriers are ready to collaborate with other carriers in a whole new way.


Tuesday, January 28, 2020

Announcing telco edge computing and hybrid cloud report 2020


As I am ramping up towards the release of my latest report on telco edge computing and hybrid cloud, I will be releasing some previews. Please contact me privately for availability date, price and conditions.

In the five years since I published my first report on the edge computing market, it has evolved from an obscure niche to a trendy buzzword. What originally started as a mobile-only technology has evolved into a complex field, with applications in IT, telco, industry and cloud. While I have been working on the subject for six years, first as an analyst, then as a developer and network operator at Telefonica, I have noticed that the industry's perception of the space has polarized drastically with each passing year.

The idea that telecom operators could deploy and use a decentralized computing fabric throughout their radio access has been largely swept aside and replaced by the inexorable advances in cloud computing, showing a capacity to abstract decentralized computing capacity into a coherent, easy to program and consume data center as a service model.

As often, there are widely diverging views on the likely evolution of this model:

The telco centric view

Edge computing is a natural evolution of telco networks. 
5G necessitates robust fibre-based backhaul transport. With the deployment of fibre, it is imperative that the old copper switching centers (the central offices) convert into multi-purpose mini data centers. These are easier and less expensive to maintain than their traditional counterparts and offer interesting opportunities to monetize unused capacity.

5G will see a new generation of technology providers that will deploy cloud native software-defined functions that will help deploy and manage computing capabilities all the way to the fixed and radio access network.

Low-risk internal use cases such as CDN, caching, local breakout, private networks, parental control, DDOS detection and isolation, are enough to justify investment and deployment. The infrastructure, once deployed, opens the door to more sophisticated use cases and business models such as low latency compute as a service, or wholesale high performance localized compute that will extend the traditional cloud models and services to a new era of telco digital revenues.

Operators have long run decentralized networks, unlike cloud providers who favour federated centralized networks, and that experience will be invaluable to administer and orchestrate thousands of mini centers.

Operators will be able to reinsert themselves into the cloud value chain through edge computing. Their right to play is underpinned by their control of, and capacity to program, the last-mile connectivity, and by the fact that traditional public clouds will not outmatch them in the number and capillarity of data centers in their geographies (outside of the US).

With its long-standing track record of creating interoperable decentralized networks, the telco community will create a set of unifying standards that will make possible an abstraction layer across all telcos, allowing edge computing services to be sold irrespective of network or geography.

Telco networks are managed networks; unlike the internet, they can offer a programmable and guaranteed quality of service. Together with 5G evolutions such as network slicing, operators will be able to offer tailored computing services with guaranteed speed, volume and latency. These network services will be key to the next generation of digital and connectivity services enabling autonomous vehicles, collaborating robots, augmented reality and pervasive AI-assisted systems.

The cloud centric view:

Edge computing, as it turns out, is less about connectivity than about cloud, unless you are able to weave in programmable connectivity.
Many operators have struggled with the creation and deployment of a telco cloud, for their own internal purposes or to resell cloud services to their customers. I don’t know of any operator who has one that is fully functional, serving a large proportion of their traffic or customers, and is anywhere as elastic, economic, scalable and easy to use as a public cloud.
So, while the telco industry has been busy trying to develop a telco edge compute infrastructure, virtualization layer and platform, the cloud providers have just started developing decentralized mini data centers for deployment in telco networks.

In 2020, the battle to decide whether edge computing is more about telco or about cloud is likely already finished, even if many operators and vendors are just arming themselves now.

Edge computing, to be a viable infrastructure-based service that operators can resell to their customers, needs a platform that allows third parties to discover, view, reserve and consume it on a global scale, not operator by operator, country by country, and it looks like the telco community is ill-equipped for a fast deployment of that nature.
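The platform gap described above can be pictured as a missing API of roughly this shape. This is an entirely hypothetical sketch: no such cross-operator interface exists today, and every name in it is invented for illustration.

```python
# Hypothetical cross-operator edge platform: third parties discover,
# reserve and consume capacity without caring which operator or country
# hosts it. All names and fields are invented for illustration.

class EdgePlatform:
    def __init__(self):
        self.sites = {}  # site_id -> {"region": ..., "free_cores": ...}

    def register_site(self, site_id, region, free_cores):
        """An operator advertises one of its edge sites to the platform."""
        self.sites[site_id] = {"region": region, "free_cores": free_cores}

    def discover(self, region, min_cores):
        """A third party finds sites in a region with enough spare capacity."""
        return [sid for sid, s in self.sites.items()
                if s["region"] == region and s["free_cores"] >= min_cores]

    def reserve(self, site_id, cores):
        """Reserve capacity at a site; returns True on success."""
        site = self.sites[site_id]
        if site["free_cores"] < cores:
            return False
        site["free_cores"] -= cores
        return True

platform = EdgePlatform()
platform.register_site("muc-01", region="eu-de", free_cores=16)
platform.register_site("mad-02", region="eu-es", free_cores=4)
print(platform.discover("eu-de", min_cores=8))  # ['muc-01']
print(platform.reserve("muc-01", 8))            # True
```

The hard part is not the code, of course: it is getting dozens of operators to expose their sites through one such abstraction, which is exactly the coordination problem the paragraph above points to.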


Whether you favour one side or the other of that argument, the public announcements in that space from AT&T, Amazon Web Services, Deutsche Telekom, Google, Microsoft, Telefonica, Vapour.io and Verizon – to name a few – will likely convince you that edge computing is about to become a reality.

This report analyses the different definitions and flavours of edge computing, the predominant use cases and the position and trajectory of the main telco operators, equipment manufacturers and cloud providers.

Thursday, September 24, 2015

SDN-NFV in wireless 2015/2016 is released




As previously announced, I have been working on my new report, "SDN-NFV in wireless 2015/2016", and I am happy to announce its release.

The report features primary and secondary research on the state of SDN and NFV standards and open source, together with an analysis of the most advanced network operators and solutions vendors in the space.

You can download the table of contents here.







Released September 2015
130 pages

  • Operator strategy and deployments review: AT&T, China Unicom, Deutsche Telekom, EE, Telecom Italia, Telefonica, ...

  • Vendor strategy and roadmap review: Affirmed Networks, ALU, Cisco, Ericsson, F5, HP, Huawei, Intel, Juniper, Oracle, Red Hat...

  • Drivers for SDN and NFV in telecom networks 
  • Public, private, hybrid, specialized clouds 
  • Review of SDN and NFV standards and open source initiatives
    • SDN 
      • Service chaining
      • Apache CloudStack, Microsoft Cloud OS, Red Hat, Citrix CloudPlatform, OpenStack, VMWare vCloud, 
      • SDN controllers (OpenDaylight, ONOS) 
      • SDN protocols (OpenFlow, NETCONF, ForCES, YANG...)
    • NFV 
      • ETSI ISG NFV 
      • OPNFV 
      • OpenMANO 
      • NFVRG 
      • MEF LSO 
      • Hypervisors: VMWare vs. KVM vs. containers
  • How does it all fit together? 
  • Core and RAN networks NFV roadmap
Terms and conditions: message me at patrick.lopez@coreanalysis.ca

Thursday, January 22, 2015

The future is cloudy: NFV 2020

As the first phase of ETSI ISG NFV wraps up and phase 1's documents are being released, it is a good time to take stock of the progress to date and what lies ahead.

ETSI members have set an ambitious agenda to create a function and service virtualization strategy for broadband networks, aiming at reducing hardware and vendor dependency while creating an organic, automated, programmable network.

The first set of documents approved and published represents great progress, and possibly one of the fastest achievements for a new standard to be rolled out: only two years. It also highlights how much work is still necessary to make the vision a reality.

Vendor announcements are everywhere: "NFV is a reality, it is happening, it works, you can deploy it in your networks today...". I have no doubt Mobile World Congress will see several "world's first commercial deployment of [insert your vLegacyProduct here]..." announcements. The reality is a little more nuanced.

Network Function Virtualization, as a standard, does not yet allow commercial deployment out of the box. There are too many ill-defined interfaces, competing protocols and missing APIs for it to be plug and play. The only viable deployment scenario today is a single-vendor, or tightly integrated (proprietary) dual-vendor, strategy for siloed services and functions. From the relatively simple (Customer Premise Equipment) to the very complex (Evolved Packet Core), it will be possible to see commercial deployments in 2015, but they will not be able to illustrate all the benefits of NFV.

As I mentioned before, orchestration, integration with SDN, performance, security, testing, governance... are some of the challenges that remain today for viable commercial deployment of NFV in wireless networks. These are only the technological challenges; the operational challenge of evolving and training the workforce at operators is probably the largest one.

From my many interactions and interviews with network operators, it is clear that there are several different strategies at play.

  1. The first strategy is to roll out a virtualized function / service with one vendor, after having tested, integrated and trialed it. It is a strategy we are seeing a lot in Japan or Korea, for instance. It provides a pragmatic learning process towards implementing virtualized functions in commercial networks, recognizing that standards and vendor implementations will not be fully interoperable for a few years.
  2. The second strategy is to stimulate the industry by standards and forum participation, proof of concepts, and even homegrown development. This strategy is more time and resource-intensive but leads to the creation of an ecosystem. No big bang, but an evolutionary, organic roadmap that picks and chooses which vendor, network element, services are ready for trial, poc, limited and commercial deployment. The likes of Telefonica and Deutsche Telekom are good examples of this approach.
  3. The third strategy is to define very specifically the functions that should be virtualized, their deployment, management and maintenance model and select a few vendors to enact this vision. AT&T is a good illustration here. The advantage is probably to have a tailored experience that meets their specific needs in a timely fashion before standards completion, the drawback being the flexibility as vendors are not interchangeable and integration is somewhat proprietary.
  4. The last strategy is not really a strategy; it is more of a wait-and-see approach. Many operators do not have the resources or the budget to lead or manage this complex network and business transformation. They are observing the progress and placing bets on what can be deployed when.
As it stands, I will continue monitoring and chairing many of the SDN / NFV shows this year. My report on SDN / NFV in wireless networks is changing fast, as the industry is, so look out for updates throughout 2015.

Tuesday, October 21, 2014

Report from SDN / NFV shows part II

Today, I would like to address what, in my mind, is a fundamental issue with the expectations raised by SDN/NFV in mobile networks.
I was two weeks ago in Dallas, speaking at SDN NFV USA and the Telco Cloud forum.

While I was busy avoiding bodily fluids with everyone at the show, I got the chance to keynote a session (slides here) with Krish Prabhu, CTO of AT&T labs.

Krish explained that the main driver for the creation and implementation of Domain 2.0 is the fact that the company's CAPEX, while a staggering $20 billion per year, is not likely to significantly increase, while traffic (used here as a proxy for costs) will grow at a minimum of 50% compounded annual growth rate for the foreseeable future.
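Compounding at that rate is brutal. A quick illustration of the squeeze, using the figures as quoted (the five-year horizon is my assumption for illustration):

```python
# Flat CAPEX against traffic compounding at 50% per year: the cost
# carried per unit of traffic has to collapse just to stand still.
# Figures as quoted above; the 5-year horizon is an assumption.

capex = 20.0   # $B per year, roughly flat
traffic = 1.0  # normalized traffic in year 0
cagr = 0.50

for year in range(1, 6):
    traffic *= 1 + cagr
    unit_cost = capex / traffic  # CAPEX per unit of traffic
    print(f"Year {year}: traffic x{traffic:.2f}, unit cost {unit_cost:.2f}")
```

After five years, traffic is roughly 7.6x today's level, so the network's cost per unit of traffic must fall by about 87% for total spend to stay flat. That arithmetic, more than anything, is what drives Domain 2.0.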
Krish then lamented:
"Google is making all the money, we are making all the investment, we have no choice but to squeeze our vendors and re architect the network."
Enter SDN / NFV.
Really? Are these the only choices? I am a little troubled by the conclusions here. My understanding is that Google, Facebook and Netflix – in short, the OTT providers – have usually looked at creating services and value for their subscribers first and then, when faced with extraordinary success, had to invent new technologies to meet their growth challenges.

Most of the rhetoric surrounding operators' reasons for exploring SDN/NFV nowadays seems to be about cost reduction. It is extremely difficult to get an operator to articulate what type of new service they would launch if their network were fully virtualized and software-defined today. You usually get the salad of existing network functions newly adorned with a "v": vBRAS, vFirewall, vDPI, vCPE, vEPC...
While I would expect these network functions to lend themselves to virtualization, they do not create new services or necessarily more value. A cheaper way to create, deploy, manage a firewall is not a new service.

The problem seems to be that our industry is once again tremendously technology-driven rather than customer-driven. Where are the marketers and service managers who will invent, for instance, real-time voice translation services by virtualizing voice processing and translation functions in the phone and at the edge? There are hundreds of new services to be invented, and I am sure SDN/NFV will help realize them. I bet Google is closer to enabling this use case than most mobile network operators. That is a problem, because operators can still provide value if they innovate, but innovation must come first from services, not technology. We should focus on the what first, the how after.
End of the rant, more techno posts soon. If you like this, don't forget to buy the report.

Tuesday, September 30, 2014

NFV & SDN 2014: Executive Summary


This Post is extracted from my report published October 1, 2014. 

Cloud and Software Defined Networking have been technologies explored successively in academia, IT and enterprise since 2011 and the creation of the Open Networking Foundation. 
They were mostly subjects of interest relegated to science projects in wireless networks until, in the fall of 2013, a collective of 13 mobile network operators co-authored a white paper on Network Functions Virtualization. This white paper became a manifesto and catalyst for the wireless community and was seminal to the creation of the eponymous ETSI Industry Standardization Group. 
Almost simultaneously, AT&T announced the creation of a new network architectural vision – Domain 2.0, heavily relying on SDN and NFV as building blocks for its next generation mobile network.

Today, SDN and NFV are hot topics in the industry and many companies have started to position themselves with announcements, trials, products and solutions.

This report is the result of hundreds of interviews, briefings and meetings with many operators and vendors active in this field. In the process, I have attended, participated in and chaired various events such as OpenStack, ETSI NFV ISG and the SDN & OpenFlow World Congress, and became a member of ETSI, OpenStack and the TM Forum.
The Open Networking Foundation, the Linux Foundation, OpenStack, the OpenDaylight project, IEEE, ETSI and the TM Forum are just a few of the organizations involved in the definition, standardization or facilitation of cloud, SDN and NFV. This report provides a view of the different organizations' contributions and their progress to date.

Unfortunately, there is no such thing as SDN-NFV today. These are technologies that have overlaps and similarities but stand widely apart. Software Defined Networking is about managing network resources. It is an abstraction that allows the definition and management of IP networks in a new fashion. It separates the data plane from the control plane and allows network resources to be orchestrated and used across applications independently of their physical location. SDN exhibits a level of maturity through the contributions to its leading open-source community, OpenStack. In its ninth release, the architectural framework is well suited to abstracting cloud resources, but it is dominated by enterprise and general IT interests, with little in terms of focus on, or applicability to, wireless networks.

Network Function Virtualization is about managing services. It allows software elements to be broken down and instantiated as virtualized entities that can be invoked, assembled, linked and managed to create dynamic services. NFV, by contrast, through its ETSI standardization group, is focused exclusively on wireless networks but, in the process of releasing its first standard, is still very incomplete in its architecture, interfaces and implementation.

SDN may or may not comprise NFV elements, and NFV may or may not be governed or architected using SDN. Many of the Proofs of Concept (PoCs) examined in this document attempt to map SDN architecture onto NFV functions in the hope of bridging the gap. The two frameworks can be complementary, but both suffer from growing pains and a diverging set of objectives.


The intent is to paint a picture of the state of SDN and NFV implementations in mobile networks. This report describes what has been trialed, deployed in labs and deployed commercially, which elements are likely to be virtualized first, what the timeframes are, and what the strategies and main players are.

Tuesday, September 9, 2014

SDN & NFV part VI: Operators, dirty your MANO!

While NFV at ETSI was initially started by network operators with their founding manifesto, in many instances we see that although there is a strong desire to force the commoditization of telecom appliances, there is little appetite among operators to perform the sophisticated integration necessary for these new systems to work.

This is, for instance, reflected in MANO, where operators seem to have put back the onus on vendors to lead the effort. 

Some operators (Telefonica, AT&T, NTT…) seem to invest resources not only in monitoring the process but also in the actual development of the technology, but by and large, according to my study, MNOs seem to have taken a passenger seat in NFV implementation efforts. Many vendors note that MNOs tend to have a very hands-off approach towards the PoCs they "participate" in, offering guidance, requirements or, in some cases, just lending their name to the effort without "getting their hands dirty".

The Orchestrator’s task in NFV is to integrate with OSS/BSS and to manage the lifecycle of the VNFs and NFVI elements. 

It onboards new network services and VNFs, and it performs service chaining in the sense that it decides through which VNFs, and in what order, traffic must pass, according to routing rules and templates.

These routing rules are called forwarding graphs. Additionally, the Orchestrator performs policy management between VNFs. Since all VNFs are proprietary, integrating them within a framework that allows their components to interact is a huge undertaking. MANO is probably the part of the specification that is the least mature today and requires the most work.
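To make the forwarding-graph idea concrete, here is a minimal sketch: a service chain expressed as an ordered list of VNFs that matching traffic must traverse. This is illustrative Python only, not the ETSI-defined MANO interfaces; all names are invented.

```python
# Minimal illustration of an NFV forwarding graph: an ordered chain of
# VNFs that matching traffic traverses. Names and structure are
# illustrative, not the ETSI MANO specification.

from dataclasses import dataclass, field

@dataclass
class VNF:
    name: str

@dataclass
class ForwardingGraph:
    match_rule: str  # classifier deciding which traffic this chain applies to
    chain: list = field(default_factory=list)  # ordered VNFs

    def route(self, packet: str) -> list:
        """Return the ordered list of VNF names the packet traverses."""
        return [vnf.name for vnf in self.chain]

# A simple chain: firewall -> DPI -> NAT, applied to subscriber traffic.
graph = ForwardingGraph(
    match_rule="src=subscriber",
    chain=[VNF("vFirewall"), VNF("vDPI"), VNF("vNAT")],
)
print(graph.route("subscriber packet"))  # ['vFirewall', 'vDPI', 'vNAT']
```

The integration problem described above is precisely that each real VNF exposes a proprietary interface, so building and policing a chain like this across vendors is far harder than the data structure suggests.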


Since it is the brain of the framework, failure of MANO to reach a level of maturity enabling consensus between the participants of the ISG will inevitably relegate NFV to vertical implementations. This could lead to a network made of a collection of vertically virtualized elements, each with its own MANO, or of very high-level API abstractions, considerably reducing overall system elasticity and programmability. SDN OpenStack-based models can be used for MANO orchestration of resources (the Virtualized Infrastructure Manager) but offer little applicability in the pure orchestration and VNF management field beyond the simplest IP routing tasks.


Operators who are serious about NFV in wireless networks should seriously consider developing their own orchestrator or, at a minimum, implementing strict orchestration guidelines. They could force vendors to adopt a minimum set of VNF abstraction templates for service chaining and policy management.

Friday, July 4, 2014

Q2 multiscreen video news

I use a service to curate and collate my news. Reading through the last few months, I realized that there are so many subjects worthy of comment that a single post wouldn't begin to address them meaningfully. I reserve in-depth analysis of specific trends or topics for my paying clients, so I decided to review and comment on press clippings and announcements as they become available, as a way to illustrate the trends, threats and opportunities surrounding our market.
Here is what caught my attention in the last quarter:

Technology: Is 4K the new 3D?

April, of course, is synonymous with NAB frenzy. Sifting through the flood of announcements at the show, I noticed a sharp change of direction in vendors' announcements and claims from last year. Where 2013 was all about HEVC H.265, this year seems to be about 4K. While HEVC licensing terms were agreed and announced by MPEG LA in February, Google's royalty-free VP9 has captured some support as well, forcing chipset and platform vendors to contemplate fragmentation and multi-codec support. Obviously, the battle for codec and protocol will determine who controls the management and delivery of 4K content going forward. In this race, not surprisingly, YouTube is siding with its parent company with VP9 support, while Netflix is adopting H.265. Both companies agree, though, and are adamant, that 4K is a lot easier to manage and deliver for OTT properties than for traditional broadcast pay-TV providers. Netflix forecasts the mass market for 4K to be five years out at the current rate of TV replacement. My opinion is that 4K adoption will suffer from H.265/VP9 fragmentation. We will probably see further delays because of the cost of implementing a dual protocol stack throughout the delivery chain.

Technology: Cloud, SDN, NFV

At NAB as well, vendors were eager to show off their new acronyms, touting dreams of cloud-based, virtualized, self-managed, software-defined networks that would… In reality, most MSOs are still focusing on rolling out HD, improving and automating workflows, and reducing overall costs. I think we still have five years to go before seeing a practical, mature implementation of SDN in professional video. Anything else is a science experiment or a proprietary implementation at this point.

Business: MSO to OTT

One of the big news items was the announcement from AT&T regarding its intent to invest, jointly with the Chernin Group, up to $500 million to create SVOD and advertising-based web streaming services. Umm... Is it too much or not enough? $500 million goes a long way if you want to build a web streaming service, but it does not seem nearly enough if you want to build an attractive content offering.
HBO, the next day, was reported to have signed a multi-year agreement with Amazon. The deal should see some of HBO's back-catalogue series made available to Amazon Prime subscribers. Little by little, HBO nudges the boundaries. You will remember that it signed a deal with Comcast last year to offer HBO Go to Comcast broadband subscribers without a cable subscription. All signs point to HBO becoming a major-league OTT provider when it is ready to cross over.

Business: OTT to Wireless

Almost coincidentally, rumours emerged that Netflix was in discussions with the Vodafone Group to distribute Netflix services with some Vodafone subscriptions. It is likely that these deals will increase in frequency. LTE/4G will create opportunities for cord-nevers and cord-shavers to access their favourite services and content on cellular networks. That is… if they figure out the charging model (paying $8 a month for Netflix and $150 in data overage charges to Vodafone wouldn't really work).

Business: OTT to MSO

Netflix has integrated its offering into Atlantic Broadband, Grande Communications and RCN Telecom Services set-top boxes, a first in the US after piloting the concept in Europe. Subscribers will be able to select the service directly from their pay-TV provider. It is an interesting strategy for small MSOs to bundle Netflix in hybrid set-top boxes: it increases reach, provides an attractive offering and good differentiation against the market leaders.

Business: M&A

Kaltura bought TVinci to expand its SVOD offering into live and linear programming. Arris bought SeaWell Networks for its edge advertising-insertion and packaging technology. SeaWell Networks' strong adaptive-bitrate streaming skill set will be invaluable in expanding the company's multiscreen strategy.

That’s all folks for this quarter! I will keep all the good net neutrality commentary for next month, hopefully when the smoke dissipates from the PR battlefield.


Thursday, February 6, 2014

The case for sponsored video


We have all seen the announcement at CES this January: AT&T is to offer a new plan for its 4G customers, allowing companies to sponsor the traffic from specific apps or content. The result would be that subscribers are not charged for the data traffic resulting from these interactions, with the sponsoring company picking up the bill.
While there is not much detail available on how the offer works, or what price the sponsor would be expected to pay for the sponsored content (after all, subscribers have very different plans, with different charging/accrual models), there has already been much speculation and commentary in the press and analyst community about the idea.
I haven't really read anything yet to convince me whether this is a good or bad idea, so I thought I would offer my two cents.

Costs are rising, ARPU is declining 

AT&T Mobility CEO Ralph de la Vega was quoted, commenting on the press release, saying that AT&T has seen 30,000% growth in mobile data over the last six years. This growth in traffic drove an increase in costs, paving the way for the spectrum bids and roll-out of LTE. US ARPU is declining for the first time in history, and with rising costs, network operators must find new revenue streams. Since video now accounts for over 50% of data traffic and growing, it is a good place to start looking.

Mobile advertising is underutilized, but there is appetite

According to KPCB, about $41B is misspent by advertisers in the US alone on old media (print, radio), relative to the time audiences now spend on new media (internet, mobile). The Internet Advertising Bureau's 2013 study (with respondents in Australia, China, Italy, South Korea, Brazil, the UK, India, Russia, Turkey and the US) shows that a large proportion of users are "ok with advertising if [they] can access content for free". The same study shows that advertisers rate targeting (45%) and reach (30%) as the most important criteria when selecting a medium. Lastly, video pre-roll appears to be the preferred ad format on tablets and smartphones.

Network operators are not (well) organized to sell advertising

Barring a few exceptions, network operators do not have the means to sell sponsored data efficiently. The technology aspect is sketchy: isolating specific data traffic from its context can be difficult (think of a Facebook app with an embedded YouTube video served by a CDN), and content and app providers do not design their services with network friendliness in mind. On the business front, the challenges are, I believe, bigger. Network operators have repeatedly failed at coopetition models, and they lack a wholesale division and mindset (everyone is scared of being only a pipe). On the bright side, Verizon, Vodafone and AT&T are putting forward APIs to start giving content providers more visibility into, and varying levels of control over, the user experience.
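The isolation problem can be made concrete: if the operator attributes flows to paying sponsors by server hostname, anything served from a shared CDN becomes ambiguous. A toy sketch, with all hostnames and the sponsor table invented for illustration (real sponsored-data systems would need far richer signals than the hostname):

```python
# Toy sketch of hostname-based sponsored-data attribution, and why it breaks
# down: embedded content served from a shared CDN carries the CDN's hostname,
# not the sponsor's. Hostnames and the sponsor table are illustrative.

SPONSORS = {
    "video.sponsor-a.example": "Sponsor A",
    "cdn.sponsor-b.example":   "Sponsor B",
}

def attribute_flow(server_hostname):
    """Map a flow to a paying sponsor; unknown hosts are billed to the subscriber."""
    return SPONSORS.get(server_hostname, "subscriber")

# Traffic straight from the sponsor's own servers attributes cleanly:
print(attribute_flow("video.sponsor-a.example"))   # "Sponsor A"
# A clip embedded in a sponsored app but served from a third-party CDN
# falls through and gets billed to the subscriber:
print(attribute_flow("edge-42.shared-cdn.example"))  # "subscriber"
```

The failure mode in the second call is exactly the Facebook-app-with-embedded-YouTube case above: the sponsor's experience and the billable flows do not line up.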

Regulatory forces are not mature for this model

We have seen the latest net neutrality commentary and fears flaring in the media. Sponsored data and/or video will have to be managed carefully if AT&T actually wants to make a business of it. I am very skeptical of AT&T's statement that "Sponsored Data will be delivered at the same speed and performance as any non-Sponsored Data content." I doubt that best effort will be sufficient when, or if, advertisers are ready to put real money on the table: they will need guarantees, service-level agreements and analytics to prove that the ad was served to completion at sufficient quality.

In conclusion, sponsored data will be difficult to put in place, but there is an appetite for it. Technically, it would be easier, and probably more beneficial, to limit the experience to video only. Culturally and business-wise, operators need to move in this direction if they want to compete against companies for which advertising is the dominant model (Google, Facebook, LinkedIn...). Separating video from general data traffic and managing it as a separate service can go a long way there. The biggest challenge, though, remains one of mindset and organization. I am not sure that sending an email to sponsoreddata@att.com is going to get McDonald's to pay for my 30 minutes of YouTube when I buy a Big Mac combo.


Tuesday, October 25, 2011

Cisco to deliver "Wireless TV" to AT&T

In a press release dated Oct. 25, Cisco announced the newest addition to its Videoscape product line: a wireless TV solution composed of a Cisco access point and Wi-Fi-enabled receivers.

The solution is being rolled out first to AT&T U-verse customers and essentially allows a centralized HD DVR to be distributed wirelessly to as many receivers and TVs as needed in the house.

This launch is advertised as the industry's first wireless IPTV deployment. While the innovation is minor (adding Wi-Fi to a DVR is hardly exceptional), the closed in-home network deployed by Cisco with the dedicated access point could lead to interesting developments.

Surely, as connected TVs start appearing and accessing OTT content, whoever controls the home gateway, including the Wi-Fi access point, will be able to manage, and in some cases enforce, access within a walled garden.

I might be cynical here, but I would not be surprised if the access point and home network delivered by AT&T were somewhat restrictive in terms of the content they deliver: probably dedicated only to the managed broadcast services offered by AT&T, with no OTT access.

In that case, it would mean another box to manage in your home (the access point), and potentially interesting issues when selecting which Wi-Fi network to connect to with your connected TV, Blu-ray player, net box or unmanaged DVR.
It will be interesting to see how this new offering fares with AT&T customers.