
Monday, May 29, 2023

The RICs - brothers from a different mother?

As you might know, if you have been a regular reader of my previous blogs and posts, I have been a spectator, advocate and occasional actor in open and disaggregated telecom networks for quite some time.

From my studies on SDN/NFV, to my time with Telefonica, the TIP forum and the ONF, to my more recent forays into Open RAN at NEC, I have been trying to understand whether telecom networks could, by adopting cloud technologies and concepts, evolve towards more open and developer-friendly entities.

Unfortunately, as you know, we love our acronyms in telecom. It is a way for us to compress obscure technology names into even more impenetrable concepts that only "specialists" understand. So, when you come across acronyms that contain other acronyms, you know that you are dealing with extra special technology.

Today, let's spend some time on the RICs. RIC stands for RAN (Radio Access Network) Intelligent Controller. The RICs are elements of open RAN, as specified by the O-RAN Alliance, a community of telecom actors looking at opening and disaggregating the RAN architecture, one of the last bastions of monolithic, proprietary naughtiness in telecom networks. If you want to understand more about why O-RAN was created, please read here. There are two types of RIC: a non-real-time (non-RT) RIC and a near-real-time (near-RT) RIC.

O-RAN logical architecture

The RICs are frameworks with standardized interfaces to each other, to the other elements of the SMO, to the cloud and to the RAN. They are supposed to be platforms onto which independent developers can build RAN-specific features, packaged as apps that would theoretically be deployable on any standards-compliant RIC. These apps are called rApps on the non-RT RIC and xApps on the near-RT RIC.
Although the RICs share the same last name, they are actually quite different, more distant cousins than siblings, which makes rApps and xApps unlikely to be compatible across vendors in a multivendor environment.

The non real time RIC and the rApps

The non-RT RIC is actually not part of the RAN itself; it is part of the Service Management and Orchestration (SMO) framework. The non-real-time aspect means that it is not intended to act on the O-RAN subsystems (near-RT RIC, Centralized Units (CU), Distributed Units (DU), Radio Units (RU)) more frequently than once per second. Indeed, the non-RT RIC can interface with these systems on timescales of minutes or even days.

Its purpose is a combination of an evolution of SON (Self Organizing Networks) and the creation of a vendor-agnostic RAN OSS. SON was a great idea initially: it was intended to provide operators with a means to automate and optimize RAN configuration. Its challenges arose from the difficulty of integrating and harmonizing rules across vendors. One of O-RAN's tenets is to promote multi-vendor ecosystems, and the non-RT RIC provides a framework for RAN automation in that context. As part of the SMO, the non-RT RIC is also an evolution of the proprietary OSS and Element Management Systems, for the first time exposing open interfaces for RAN configuration.
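To make the rApp concept a little more concrete, here is a minimal sketch of what a non-RT RIC hosted rApp could look like when it pushes a traffic-steering policy towards the near-RT RIC over the A1 interface. The endpoint layout loosely follows the A1 policy-type / policy-instance REST style, but the URL, policy type id and JSON body are invented for illustration; a real rApp would use the RIC vendor's SDK and the schemas it publishes.

```python
import requests

# Hypothetical A1 policy: steer traffic away from congested cells at off-peak.
# The URL layout, type id and JSON schema below are invented for illustration
# only; they are not taken verbatim from any vendor SDK or O-RAN document.
A1_BASE = "http://non-rt-ric.example.internal/a1-p"
POLICY_TYPE_ID = 20008          # assumed "traffic steering" policy type
POLICY_INSTANCE_ID = "ts-night-offload"

policy_body = {
    "scope": {"cellIdList": ["cell-4401", "cell-4402"]},
    "statement": {"preference": "AVOID", "timeWindow": "00:00-06:00"},
}

def push_policy() -> int:
    """Create or update a policy instance on the near-RT RIC via A1."""
    url = f"{A1_BASE}/policytypes/{POLICY_TYPE_ID}/policies/{POLICY_INSTANCE_ID}"
    resp = requests.put(url, json=policy_body, timeout=10)
    resp.raise_for_status()
    return resp.status_code

if __name__ == "__main__":
    print("A1 policy pushed, HTTP status:", push_policy())
```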

Because of its dual legacy from OSS and SON, and its less stringent integration needs, the non-RT RIC has been the first entity to attract many new entrant vendors, whether from the OSS, SON, transport or cloud infrastructure management communities.

Because of their non-real-time nature (and because many non-RT RIC and rApp vendors are not the RAN vendors themselves), rApps have somewhat limited capabilities in multivendor environments. Most vendors provide visualization / topology / dashboard capabilities and enhancements revolving around neighbour and handover management.

The near real time RIC and the xApps

The near-real-time RIC is part of the RAN and comprises a set of functionalities that are, in a traditional RAN implementation, part of the feature set of the gNodeB base station. O-RAN has separated these capabilities into a framework exposing open interfaces to the RAN components, theoretically allowing vendor-agnostic RAN automation and optimization. "Near real time" implies sub-second interactions between the elements. And... here's the rub.
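To give a feel for what "sub-second" means in practice, here is a toy control loop of the kind an xApp would run: observe cell metrics, decide, act, all within a roughly 100 ms budget. The metric and control functions are placeholders; a real xApp would receive E2 indications and send E2 control messages through a vendor SDK rather than the stubs below.

```python
import random
import time

# Toy stand-ins for the E2 subscription and control hooks an xApp would
# normally get from a RIC SDK; everything below is illustrative only.
def read_cell_metrics(cell_id: str) -> dict:
    """Pretend E2 indication: PRB utilisation and active users for a cell."""
    return {"prb_utilisation": random.uniform(0.3, 1.0),
            "active_users": random.randint(5, 200)}

def send_control(cell_id: str, action: str) -> None:
    """Pretend E2 control message (e.g. adjust handover thresholds)."""
    print(f"[control] cell={cell_id} action={action}")

def control_loop(cell_id: str, period_ms: int = 100) -> None:
    """Near-RT loop: observe, decide and act roughly every 100 ms."""
    while True:
        start = time.monotonic()
        metrics = read_cell_metrics(cell_id)
        if metrics["prb_utilisation"] > 0.85:
            send_control(cell_id, "offload_to_neighbour")
        # Sleep out the remainder of the control period.
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, period_ms / 1000 - elapsed))

if __name__ == "__main__":
    control_loop("cell-4401")
```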

Millisecond adjustments are necessary in the RAN to account for modulation, atmospheric or interference conditions. This frequency requires a high level of integration between the CU, DU and RU if performance is not to suffer. As often in telecoms, the issue is not with the technology but rather with the business model. O-RAN's objective is to commoditize and disrupt the RAN, which is an interesting proposition for its consumers (the operators) and for new entrants, less so for legacy vendors. The disaggregation of the RAN with the creation of the near-RT RIC and xApps goes one step further, commoditizing the RU, CU and DU and extracting the algorithmic value and differentiation into the xApps. The problem with disruption is that it only works in mature, entrenched market segments. While traditional RAN might be mature enough for disruption, it is uncertain whether open RAN itself is mature enough for its new RU, CU and DU entrants to be commoditized in turn by xApps.

For this reason, it is likely that if the near-RT RIC and xApps are to be successful, only the dominant RU, CU and DU vendors will be able to develop and deploy them, which will create serious dependencies and work against vendor independence.

 I am currently working on my next report on Open RAN and RIC and will provide more updates as I progress there.




Wednesday, January 8, 2020

Open or open source?

For those who know me, you know that I have been a firm supporter of openness by design for a long time. It is important not to conflate openness and open source when it comes to telco strategy, though.

Most network operators believe that any iteration of their network elements must be fully interoperable within their internal ecosystem (their network) and their external ecosystem (other telco networks). This is fundamentally what allows any phone user to roam and use any mobile networks around the planet.
This need for interoperability has reinforced the importance of standards bodies such as ETSI and 3GPP and forums such as the GSMA over the last 20 years. This interoperability by design has led to the creation of rigid interfaces, protocols and datagrams that preside over how network elements should integrate and interface in telco and IP networks.
While this model has worked well for the purpose of creating a unified global aggregation of networks with 3G/4G, departing from the fragmentation of 2G (GSM, CDMA, TDMA, AMPS...), it has also somewhat slowed down and stifled the pace of innovation for network functions.

The last few years have seen an explosion of innovation in networks, stemming from the emergence of data centers, clouds, SDN and virtualization. The benefits have been incredible, ranging from freedom from proprietary hardware dependency, to increased multi-tenancy, resource elasticity, traffic programmability, automation and ultimately the atomization of network functions into microservices. This allowed the creation of higher-level network abstractions without the need for low-level programming or coding (for more on this, read anything ever written by the excellent Simon Wardley). These benefits have been systematically developed and enjoyed by the companies that needed to scale their networks the fastest: the webscalers.

In the process, as the technologies underlying these new networks passed from prototype, to product, to service, to microservice, they have become commoditized. Many of these technologies, once close to maturity, have been open sourced, allowing a community of similarly interested developers to flourish and develop new products and services.

Telecom operators were inspired by this movement and decided that they too needed to evolve their networks into something more akin to an elastic cloud, in order to decorrelate traffic growth from cost. Unfortunately, the desire for interoperability and the lack of engineering development resources led operators to try to influence and drive the development of a telco open source ecosystem without really participating in it. NFV (Network Functions Virtualization) and telco OpenStack are good examples of great ideas with poor results. Let's examine why:

NFV was an attempt to separate hardware from software and stimulate a new ecosystem of vendors developing telco functions in a more digital fashion. Unfortunately, the design of NFV was a quasi-literal transposition of appliance functions, with little influence from SDN or microservice architectures. More importantly, it relied on an orchestration function that was going to become the "app store" of the network. To be really vendor-agnostic, this orchestrator would have to be fully interoperable with all vendors adhering to the standard and preferably expose open interfaces allowing interchangeability of both the network functions and the orchestrator vendors (a minimal sketch of what such a contract would imply follows below). In practice, none of the traditional telecom equipment manufacturers had plans to integrate with third-party orchestrators, and most would try to deploy their own as a condition for deploying their network functions. Correctly identifying the strategic risk, the community of operators started two competing open source projects: Open Source MANO (OSM) and the Open Network Automation Platform (ONAP).
Without entering into the technical details, both projects suffered to varying degrees from the same cardinal sin. Open source development is not a spectator sport. You do not decree an ecosystem or a community of developers into existence; you do not demand contribution, you earn it. The only way open source projects succeed is if their main sponsors actively contribute (code, not diagrams or specs) and if the code goes into production where its benefits can be easily illustrated. In both cases, most operators opted to rely heavily on third parties to develop what they envisioned, with insufficient real-life experience to ensure the results were up to the task. Only those who roll up their sleeves and develop really benefit from the projects.
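To illustrate what "interchangeability" would actually demand of the orchestration layer discussed above, here is a minimal sketch of the kind of vendor-neutral lifecycle contract every VNF vendor would have to honour. The interface and method names are invented for the illustration; they are not taken from the ETSI MANO specifications.

```python
from abc import ABC, abstractmethod

class VnfLifecycle(ABC):
    """Hypothetical vendor-neutral contract a MANO orchestrator could program
    against. If every vendor implemented it faithfully, VNFs (and orchestrators)
    would be interchangeable. Names are illustrative, not ETSI-defined."""

    @abstractmethod
    def instantiate(self, descriptor: dict) -> str:
        """Deploy the VNF from its descriptor and return an instance id."""

    @abstractmethod
    def scale(self, instance_id: str, replicas: int) -> None:
        """Scale the VNF in or out."""

    @abstractmethod
    def heal(self, instance_id: str) -> None:
        """Recover a failed instance."""

    @abstractmethod
    def terminate(self, instance_id: str) -> None:
        """Tear the VNF down and release its resources."""
```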

OpenStack was, in comparison, already a successful ecosystem and open source development forum when telco operators tried to bend it to their purpose. It had been deployed in many industries, ranging from banking and insurance to transportation and manufacturing, and had a large developer community. Operators thought that piggybacking on this community would accelerate the development of an OpenStack suited to telco operations. The first efforts were to introduce traditional telco requirements (high availability, geo-redundancy, granular scalability...) into a model that was fundamentally best-effort IT cloud infrastructure management. As I wrote 6 years ago, OpenStack at that stage was ill-suited to the telco environment. And it remained so. Operators resisted hiring engineers and coding enough functionality into OpenStack to make it telco grade, instead relying on their traditional telco vendors to do the heavy lifting for them.

The lessons here are simple.
If you want to build a network that is open by design, to ensure vendor independence, you need to manage the control layer yourself. In all likelihood, trying to specify it and asking others to build it for you will fail if you have never built one yourself.
Open source can be a good starting point if you want to iterate and learn fast, prototype and test, and get smart enough to know what is mature, what should be bought, what should be developed and where the differential value lies. Don't expect open source to be a means for others to do your labour. The only way you get more out of open source than you put in is a long-term investment with real contribution, not just guidance and governance.

Wednesday, January 25, 2017

World's first ETSI NFV Plugfest

As everyone in the telecom industry knows, the transition from standard to implementation can be painful, as vendors and operators translate technical requirements and specifications into code. There is always room for interpretation, and a desire to innovate or differentiate, that can lead to integration issues. Open source initiatives have been able to provide viable source code for the implementation of elements and interfaces, and they are a great starting point. The specific vendor and operator implementations still need to be validated, and it is necessary to test that integration needs are minimal.

Network Functions Virtualization (NFV) is an ETSI standard that is a crucial element of telecom network evolution, as operators look at the transformation necessary to accommodate the hyper-growth resulting from video services moving online and to mobile.

As a member of the organization’s steering committee, I am happy to announce that the 5G open lab 5Tonic will be hosting the world’s first ETSI NFV plugfest from January 23 to February 3, 2017 with the technical support of Telefonica and IMDEA Networks Institute.  

5Tonic is opening its doors to the NFV community, comprising network operators, vendors and open source collaboration initiatives, to assess and compare their implementations of Virtual Network Functions (VNFs), NFV Infrastructure (NFVI) and Virtualized Infrastructure Managers (VIM). Additionally, implementations of Management and Orchestration (MANO) functions will also be available.

43 companies and organizations have registered, making this the largest NFV interoperability event in the world.

Companies:
•           Telefonica
•           A10
•           Cisco
•           Canonical
•           EANTC
•           EHU
•           Ensemble
•           Ericsson
•           F5
•           Fortinet
•           Fraunhofer
•           HPE
•           Huawei
•           Anritsu
•           Intel
•           Italtel
•           Ixia
•           Keynetic
•           Lenovo
•           Mahindra
•           Openet
•           Palo Alto
•           Radware
•           RIFT.io
•           Sandvine
•           Sonus
•           Spirent
•           RedHat
•           VMWare
•           WIND

Open source projects:
•           OSM (Open Source MANO)
•           Open Baton
•           Open-O
•           OPNFV

 OSM is delivering an open source MANO stack aligned with ETSI NFV Information Models. As an operator-led community, OSM is offering a production-quality open source MANO stack that meets the requirements of commercial NFV networks.

Testing will take place on site at the 5TONIC lab near Madrid, as well as virtually for remote participants.


Thursday, January 22, 2015

The future is cloudy: NFV 2020

As the first phase of ETSI ISG NFV wraps up and phase 1's documents are being released, it is a good time to take stock of the progress to date and what lies ahead.

ETSI members have set an ambitious agenda to create a function and service virtualization strategy for broadband networks, aiming at reducing hardware and vendor dependency while creating an organic, automated, programmable network.

The first set of documents approved and published represents great progress and possibly one of the fastest rollouts of a new standard, in only two years. It also highlights how much work is still necessary to make the vision a reality.

Vendor announcements are everywhere: "NFV is a reality, it is happening, it works, you can deploy it in your networks today...". I have no doubt Mobile World Congress will see several "world's first commercial deployment of [insert your vLegacyProduct here]...". The reality is a little more nuanced.

Network Functions Virtualization, as a standard, does not yet allow commercial deployment out of the box. There are too many ill-defined interfaces, competing protocols and missing APIs to make it plug and play. The only viable deployment scenario today is a single-vendor or tightly integrated (proprietary) dual-vendor strategy for siloed services / functions. From the relatively simple (Customer Premise Equipment) to the very complex (Evolved Packet Core), it will be possible to see commercial deployments in 2015, but they will not be able to illustrate all the benefits of NFV.

As I mentioned before, orchestration, integration with SDN, performance, security, testing, governance... are some of the challenges that remain today for viable commercial deployment of NFV in wireless networks. These are only the technological challenges; as mentioned before, the operational challenge of evolving and training operators' workforces is probably the largest one.

From my many interactions and interviews with network operators, it is clear that there are several different strategies at play.

  1. The first strategy is to roll out a virtualized function / service with one vendor, after having tested, integrated and trialed it. It is a strategy that we are seeing a lot in Japan or Korea, for instance. It provides a pragmatic learning process towards implementing virtualized functions in commercial networks, recognizing that standards and vendor implementations will not be fully interoperable for a few years.
  2. The second strategy is to stimulate the industry through standards and forum participation, proofs of concept, and even homegrown development. This strategy is more time- and resource-intensive but leads to the creation of an ecosystem. No big bang, but an evolutionary, organic roadmap that picks and chooses which vendors, network elements and services are ready for trial, PoC, limited or commercial deployment. The likes of Telefonica and Deutsche Telekom are good examples of this approach.
  3. The third strategy is to define very specifically the functions that should be virtualized, their deployment, management and maintenance model, and to select a few vendors to enact this vision. AT&T is a good illustration here. The advantage is a tailored experience that meets their specific needs in a timely fashion, ahead of standards completion; the drawback is flexibility, as vendors are not interchangeable and integration is somewhat proprietary.
  4. The last strategy is not a strategy; it is more a wait-and-see approach. Many operators do not have the resources or the budget to lead or manage this complex network and business transformation. They are observing the progress and placing bets in terms of what can be deployed when.
As it stands, I will continue monitoring and chairing many of the SDN / NFV shows this year. My report on SDN / NFV in wireless networks is changing fast, as the industry is, so look out for updates throughout 2015.

Wednesday, January 14, 2015

2014 review and 2015 predictions

Last year, around this time, I had made some predictions for 2014. Let's have a look at how I fared and I'll risk some opinions for 2015.
Before predictions, though, new year, new web site, check it out at coreanalysis.ca

Content providers, creators, aggregators:

"OTT video content providers are reaching a stage of maturity where content creation / acquisition was the key in the first phase, followed by subscriber acquisition. As they reach critical mass, the game will change and they will need to simultaneously maximize monetization options by segmenting their user base into new price plans and find a way to unlock value in the mobile market." 
On that front, content creation / acquisition still remains a key focus of large video OTTs (see Netflix's launch of Marco Polo for $90m). Netflix reported $8.9B of content obligations as of September 2014. On the monetization front, we have also seen signs of maturity, with YouTube experimenting with new premium channels and Netflix charging a premium for 4K streaming. HBO has started to break out of its pay-TV shell and has signed deals to be delivered as online, broadband-only subscriptions, without cable/satellite.
Netflix signed a variety of deals with European MSOs and broadband operators as it launched there in 2014.
While many OTTs, particularly social networks and radio / audio streaming services, have collaborated and signed deals with mobile network operators, we are also seeing a tendency to increasingly encrypt and obfuscate online services to prevent network operators from meddling in content delivery.
Both trends will likely accelerate in 2015, with more deals being struck between OTTs and network operators for subscription-based zero-rated data services. We will also see the proportion of encrypted data traffic in mobile networks rise from the low 10s to at least 30% of overall traffic.

Wholesaler or Value provider?


The discussion about the place of the network operator and MSO in content and service delivery is still very much active. We saw, late last year, the latest net neutrality sabre-rattling from network operators and OTTs alike, with even politicians entering the fray and trying to influence the regulatory debates. This will likely not be settled in 2015. As a result, we will see both more cooperation and more competition, with integrated offerings (OTTs could go full MVNO soon) and encrypted, obfuscated traffic on the rise. We will probably also see the first lawsuits from OTTs against carriers with respect to traffic mediation, optimization and management. This adversarial climate will delay further monetization plays relying on mobile advertising. Only integrated offerings between OTTs and carriers will be able to tap this revenue source.
Some operators will step away from the value-provider strategy and embrace wholesale models, trying to sign as many MVNOs and OTTs as possible and focusing on network excellence. These strategies will fail as the price per byte declines inexorably, unable to sustain a business model where more capacity requires more investment for diminishing returns.
Some operators will seek to actively manage and mediate the traffic transiting through their networks and will implement HTTPS / SPDY proxy to decrypt and optimize encrypted traffic, wherever legislation is more supple.

Mobile Networks

CAPEX will be on the rise overall, with heterogeneous networks and LTE roll-outs taking the lion's share of investments.
LTE networks will show signs of weakness in terms of peak traffic handling, mainly due to video and audio streaming, and some networks will accelerate LTE-A investments or aggressively curb traffic through data caps, throttles and onerous pricing strategies.

SDN will continue its progress as a back-office and lab technology in mobile networks, but its inability so far to provide reliable, secure, scalable and manageable network capability will prevent it from making a strong commercial debut in wireless networks; 2018 is the likeliest time frame.

NFV will show strong progress and first commercial deployments in wireless networks, but in a vertical, proprietary fashion, with legacy functions (DPI, EPC, IMS...) translated into a virtualized environment in a mono-vendor approach. We will also see micro-deployments in emerging markets where cost of ownership takes precedence over performance or reliability. APAC will also see some commercial deployments in large networks (Japan, Korea) in fairly proprietary implementations.
Orchestration and integration with SDN will be the key investments in the standardization community. The timeframe for mass-market, interoperable, multi-vendor commercial deployment is likely 2020.

To conclude this post, my last prediction is that someone will likely be bludgeoned to death with their own selfie stick, I'll put my money on Mobile World Congress 2015 as a likely venue, where I am sure countless companies will give them away, to the collective exasperation and eye-rolling of the Barcelona population.

That's all folks, see you soon at one of the 2015 shows.

Monday, October 20, 2014

Report from SDN / NFV shows part I

Wow! Last week was a busy week for everything SDN / NFV, particularly in wireless. My in-depth analysis of the segment is captured in my report. Here are a few thoughts on the latest news.

First, as is now almost traditional, a third white paper was released by network operators on Network Functions Virtualization. Notably, the original group of 13 who co-wrote the first manifesto that spurred the creation of ETSI ISG NFV has now grown to 30. The Industry Specification Group now counts 235 companies (including yours truly) and has seen 25 Proofs of Concept initiated. In short, the white paper announces another two-year term of effort beyond the initial timeframe. This new phase will focus on multi-vendor orchestration operability and integration with legacy OSS/BSS functions.

MANO (orchestration) remains a point of contention, and many are starting to recognise the growing threat and opportunity the function represents. Some operators (like Telefonica) seem to have reached the same conclusions as I have in this blog and are starting to look deeply into what implementing MANO means for the ecosystem.

I will go a step further today. I believe that MANO in NFV has the potential to evolve the same way as the app stores did in wireless. It is probably an apt comparison: both are used to safeguard, reference, inventory and manage the propagation and lifecycle of software instances.

In both cases, the referencing of the apps/VNFs is a manual process, with arbitrary rules that can lead to dominant positions if not caught early. It would be relatively easy, in this nascent market, for an orchestrator vendor to integrate as many VNFs as possible, with some "extensions" to lock in the segment, as Apple and Google did with mobile.

I know, "Open" is the new "Organic", but for me, there is a clear need to maybe create an open source MANO project, lets call it "OpenHand"?

You can view below a mash-up of the presentations I gave at the show last week and at SDN & NFV USA in Dallas the week before.



More notes on these past few weeks soon. Stay tuned.

Tuesday, August 26, 2014

SDN / NFV part V: flexibility or performance?


Early on in my investigations of how SDN and NFV are being implemented in mobile networks, I found that performance remains one of the largest stumbling blocks the industry has to overcome if we want to transition to next-generation networks.

Specifically, many vendors recognize behind closed doors that a virtualized environment today has many performance challenges. It probably explains why so many of the PoCs feature chipset vendors as participants. A silicon vendor as a main proponent of virtualization is logical, as the industry seeks to transition from purpose-built proprietary hardware to open COTS platforms. It does not fully explain, though, the heavy involvement of the chipset vendors in these PoCs. Surely, if the technology were interoperable and open, chipset vendor integration would not be necessary?

Linux limitations

Linux as an operating system was originally developed for single-core systems. As multi-core and multithreaded architectures made their appearance, the operating system showed great limitations in managing particularly demanding data plane applications. When one looks at a virtualized network function, one has to contend with both the host OS and the guest OS.
In both cases, a major loss of performance is observed on entry to and exit from the VM and the OS. These software interrupts are necessary to pull packets from the data plane up to the application layer so that they can be processed. The cost of software interrupts for Linux kernel access ends up being prohibitive and creates bottlenecks and race conditions as traffic increases and more threads are involved. Specifically, every time the application needs to access the Linux kernel, it must pause the VM, save its context and stall the application while the kernel is accessed. A base station, for instance, can generate over 100k software interrupts per second.
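As a crude, host-level illustration of why these kernel crossings matter, the snippet below times a tight loop of system calls against a loop of pure user-space work. It is not a model of VM exits or of a real data plane, just a way to get an order-of-magnitude feel for per-crossing overhead on a Linux machine.

```python
import os
import time

# Each os.read() below is a system call, i.e. a switch into the kernel and
# back. Compare its per-iteration cost with work that never leaves user space.
N = 200_000

fd = os.open("/dev/zero", os.O_RDONLY)   # Linux/Unix only
start = time.perf_counter()
for _ in range(N):
    os.read(fd, 1)                       # one kernel crossing per iteration
syscall_time = time.perf_counter() - start
os.close(fd)

start = time.perf_counter()
total = 0
for i in range(N):
    total += i                           # pure user-space work
userspace_time = time.perf_counter() - start

print(f"avg per syscall      : {syscall_time / N * 1e9:.0f} ns")
print(f"avg per user-space op: {userspace_time / N * 1e9:.0f} ns")
```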

Intel DPDK and SR-IOV

Intel's Data Plane Development Kit (DPDK), with commercial fast-path extensions from vendors such as 6WIND, is used for I/O and packet-forwarding functions. A "fast path" is created between the VM and the virtual network interface card (NIC) that improves data path processing performance. This effectively bypasses the kernel networking stack and provides fast processing of packets between the VM and the host.
At the host level, Single Root I/O Virtualization (SR-IOV) is also used in conjunction with DPDK to provide NIC-to-VM connectivity, bypassing the Linux host and improving packet-forwarding performance. The trade-off is that each VM using SR-IOV must be tied to a physical network card.
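For the curious, SR-IOV capability and configuration can be inspected on a Linux host through standard sysfs attributes; the short sketch below lists how many virtual functions each physical NIC exposes. It only reads configuration and assumes it runs on the host, not in a guest; attaching VFs to VMs is done elsewhere (libvirt, OpenStack Nova, etc.).

```python
import glob
import os

def list_sriov_nics():
    """Yield (nic, configured VFs, total VFs) for SR-IOV capable interfaces."""
    for path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
        nic = path.split("/")[4]
        with open(path) as f:
            total = int(f.read().strip())
        with open(os.path.join(os.path.dirname(path), "sriov_numvfs")) as f:
            configured = int(f.read().strip())
        yield nic, configured, total

if __name__ == "__main__":
    for nic, configured, total in list_sriov_nics():
        print(f"{nic}: {configured}/{total} virtual functions configured")
```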

Performance or Flexibility?

The implementation of DPDK and SR-IOV has a cost. While VNFs implementing both techniques show performance close to that of physical appliances, the trade-off is flexibility. In implementing these mechanisms, the VMs are effectively bound to the physical hardware resources they depend on. A perfect configuration, and a completely identical replication of every element at both the software and physical level, is necessary for migration and scale-out. While Intel is working on a virtual DPDK integrated into the hypervisor, implementations of SDN / NFV in wireless networks for data-plane-hungry network functions will, in the short to medium term, force vendors and networks to choose between performance and flexibility.

More content available here.

Wednesday, April 11, 2012

Policy driven optimization

The video optimization market is still young, but with over 80 mobile networks deployed globally, I am officially transitioning it from emerging to growth phase in the technology life cycle matrix.


Mobile World Congress brought much news in that segment, from new entrants to network announcements, technology launches and new partnerships. I think one of the most interesting trends is in policy and charging management for video.


Operators understand that charging models based on pure data consumption are doomed to be hard for users to understand and to be either extremely inefficient or expensive. In a world where a new iPad can consume a subscriber's data plan in a matter of hours, while the same subscriber could watch 4 to 8 times as much video for the same data volume on a different device, the one-size-fits-all data plan is a dangerous proposition.
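A quick back-of-the-envelope calculation illustrates the point; the bitrates and the 2 GB allowance below are rough assumptions, not measurements.

```python
# The same hour of video costs wildly different amounts of data depending on
# the device and encoding profile. Bitrates are illustrative assumptions.
PLAN_GB = 2.0  # assumed monthly data allowance

profiles_kbps = {
    "small-screen phone stream": 300,
    "HD tablet stream (e.g. new iPad)": 2000,
}

for name, kbps in profiles_kbps.items():
    gb_per_hour = kbps * 3600 / 8 / 1e6          # kbps -> GB per hour
    hours_in_plan = PLAN_GB / gb_per_hour
    print(f"{name}: {gb_per_hour:.2f} GB/h, plan lasts ~{hours_in_plan:.1f} h")
```

With these assumptions, the phone stream burns about 0.14 GB per hour (roughly 15 hours of viewing on the plan) while the tablet stream burns about 0.9 GB per hour (barely 2 hours), a ratio in the 4x to 8x range the paragraph describes.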


While the tool set to address the issue is essentially in place, with intelligent GGSNs, EPCs, DPIs, PCRFs and video delivery and optimization engines, this collection of devices has mostly been managing its portion of traffic in a very disjointed fashion: access control at the radio and transport layers segregated from protocol and application awareness, accounting separated from authorization and charging...
Policy control is the technology designed to unify them and, since this market's inception, it has been doing a good job of coordinating access control, accounting, charging, rating and permissions management for voice and data.


What about video?
The Diameter Gx interface is extensible, providing the semantics to convey traffic observations and decisions between one or several policy decision points and policy enforcement points. The standard allows for complex iterative exchanges between end points to ascertain a session's user, their permissions and their balance as they use cellular services.
Video was not a dominant part of the traffic when the policy frameworks were put in place, and not surprisingly, the first generation PCRFs and video optimization deployments were completely independent. Rules had to be provisioned and maintained in separate systems, because the PCRF was not video aware and the video optimization platforms were not policy aware.
This led to many issues, ranging from poor experience (DPI instructed to throttle traffic below the encoding rate of a video) and bill shock (ill-informed users blowing past their data allowance) to revenue leakage (poorly designed charging models unable to differentiate between types of HTTP traffic).
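As a toy illustration of the first failure mode, and of what a video-aware policy decision can do about it, here is a sketch of a rule that never enforces a bandwidth cap below the detected encoding rate of an active video flow. The field names and the 20% headroom are assumptions made for the sake of the example, not part of any standard.

```python
from typing import Optional

def effective_throttle_kbps(requested_cap_kbps: int,
                            detected_video_kbps: Optional[int]) -> int:
    """Clamp a PCRF-requested cap so it never starves an active video flow.

    Illustrative only: a real deployment would carry this logic in the policy
    decision point, fed by DPI / video-optimization detection of the stream.
    """
    if detected_video_kbps is None:
        return requested_cap_kbps            # no video detected: apply cap as-is
    floor = int(detected_video_kbps * 1.2)   # encoding rate + 20% headroom
    return max(requested_cap_kbps, floor)

# A 256 kbps fair-use cap against a 700 kbps video would stall playback;
# the video-aware version raises the enforcement floor to 840 kbps.
print(effective_throttle_kbps(256, 700))   # -> 840
print(effective_throttle_kbps(256, None))  # -> 256
```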


The next-generation networks see a much tighter integration between policy decision and policy enforcement for the delivery of video in mobile networks. Many vendors in both segments collaborate and have moved past pure interoperability testing to deployments in commercial networks. Unfortunately, we have not seen many proof points of these integrations yet. Mostly, this is because it is an emerging area: operators are still trying to find the right recipe for video charging, standards do not offer guidance for specific video-related policies, and vendors have to rely on two-way (proprietary?) implementations.


Lately, we have seen the leaders in policy management and video optimization collaborate much more closely to offer solutions in this space: in some cases as the result of being deployed in the same networks and being "forced" to integrate gracefully, in many cases because the market is entering a new stage of maturation. As you well know, I have been advocating closer collaboration between DPI, policy management and video optimization for a while (here, here and here for instance). I think these are signs of market maturation that will accelerate concentration in that space. There are more and more rumors of video optimization vendors getting closer to mature policy vendors. It is a logical outcome for operators seeking a better-integrated traffic management and charging ecosystem centered around video going forward. I am looking forward to discussing these topics and more at Policy Control 2012 in Amsterdam, April 24-25.

Wednesday, January 11, 2012

For or against Adaptive Bit Rate? part III: Why isn't ABR more successful?

So why isn't ABR more successful? As we have seen here and here, there are many pros for the technology. It is a simple, efficient means to reduce the load on networks, while optimizing the quality of experience and reducing costs.

Let's review the problems experienced by ABR that hinder its penetration in the market.

1. Interoperability
Ostensibly, having three giants such as Apple, Adobe and Microsoft each pushing their own implementation leads to obvious issues. First, the three implementations are not interoperable; that's one of the reasons why your iPad won't play Flash videos. Not only do the file formats differ (fragmented MP4 vs. multiplexed MPEG-2 TS), but the delivery protocols and even the manifests are proprietary. This leads to market fragmentation that forces content providers to choose a camp or implement all technologies, which drives up the cost of maintenance and operation proportionally. MPEG-DASH, a new initiative aimed at rationalizing ABR use across the different platforms, was approved just last month. The idea is that all HTTP-based ABR technologies will converge towards a single format, protocol and manifest.

2. Economics
Apple, Adobe and Microsoft seek to control content owners and production by enforcing their own formats and encodings. I don't see them converging for the sake of coopetition in the short term. A good example is Google's foray into WebM and its ambitions for YouTube.

3. Content owners' knowledge of mobile networks
Adaptive bit rate puts the onus on content owners to decide which flavour of the technology they want to implement, together with the range of quality levels they want to enable. In last week's example, we saw how one file can translate into 18 versions and thousands of fragments to manage. Obviously, not every content provider is going to go the costly route of transcoding and managing 18 versions of the same content, particularly if this content is user-generated or free to air. This leaves the content provider with the difficult task of selecting how many versions of the content and how many quality levels to support.
As we have seen over the last year, the market changes at a very rapid pace in terms of which vendors are dominant in smartphones and tablets. It is a headache for a content provider to foresee which devices will access their content. This is compounded by the fact that most content providers have no idea what the effective delivery bit rates can be for EDGE, UMTS, HSPA, HSPA+ or LTE. In this situation, the available encoding rates can be inappropriate for the delivery capacity.


In the example above, although the content is delivered through ABR, the content playback will be impacted as the delivery bit rate falls below the lowest available encoding bit rate. This results in a bad user experience, ranging from buffering to interruption of the video playback.
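The logic at play can be sketched in a few lines: a client picks the highest rendition that fits its measured throughput, and once throughput falls below the lowest rung of the ladder there is nothing left to step down to. The bitrate ladder and safety margin below are illustrative assumptions.

```python
# Minimal sketch of client-side rendition selection and of the failure mode
# when throughput falls below the lowest rendition in the manifest.
LADDER_KBPS = [2000, 1200, 700, 400]   # renditions offered in the manifest

def pick_rendition(measured_throughput_kbps: float) -> int:
    """Pick the highest rendition that fits the measured throughput."""
    for kbps in LADDER_KBPS:
        if kbps <= measured_throughput_kbps * 0.8:   # keep a safety margin
            return kbps
    # Below the lowest rung there is nothing left to step down to:
    # playback will rebuffer or stall, exactly the bad experience described.
    return LADDER_KBPS[-1]

for throughput in (2500, 900, 450, 250):
    print(f"{throughput} kbps link -> {pick_rendition(throughput)} kbps rendition")
```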

4. Tablet and smartphone manufacturers' knowledge of mobile networks
Obviously, delegating the selection of content quality to the device is a smart move. Since the content is played on the device, this is where there is the clearest understanding of instantaneous network capacity or congestion. Unfortunately, certain handset vendors, particularly those coming from the consumer electronics world, do not have enough experience in wireless IP for efficient video delivery. Some devices, for instance, will grab the highest capacity available on the network, irrespective of the encoding rate of the video requested. So, if the capacity at connection time is 1 Mbps and the video is encoded at 500 kbps, it will be downloaded at twice its playback rate. That is not a problem when the network has capacity to spare, but as congestion creeps in, this behaviour snowballs and compounds congestion in embattled networks.
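The better-behaved alternative is to pace segment downloads to the content's playback duration rather than racing the link. A rough sketch, with a dummy download function standing in for the real HTTP fetch:

```python
import time

def fetch_segment(url: str) -> bytes:
    # Placeholder: a real client would issue an HTTP GET for the segment here.
    return b"\x00" * 1024

def paced_fetch(urls, segment_seconds: float = 10.0):
    """Download one segment per segment duration instead of as fast as possible.

    Because each segment holds segment_seconds of content at the encoding rate,
    pacing on segment duration keeps the average download rate near the
    encoding rate rather than the full link capacity.
    """
    for url in urls:
        start = time.monotonic()
        data = fetch_segment(url)            # would otherwise run at link speed
        elapsed = time.monotonic() - start
        time.sleep(max(0.0, segment_seconds - elapsed))
        yield data

if __name__ == "__main__":
    for chunk in paced_fetch([f"seg-{i}.ts" for i in range(3)], segment_seconds=1.0):
        print(f"got {len(chunk)} bytes")
```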

As we can see, there are still many obstacles to overcome before ABR becomes a successful mass-market implementation. My next post will show what alternatives to ABR exist in mobile networks for efficient video delivery.

Tuesday, April 26, 2011

LTE: it's a Little Too Early part 1

As we are starting to see the first announcements about LTE deployments and their promises of speed comparable only to those experienced on the Autobahn, I wanted to dig a little bit into the maturity of the technology and its time frame for mass-market applicability.
My conclusion? It's still a Little Too Early for LTE.

According to the GSA (Global mobile Suppliers Association), the current number of commitments from network operators to implement LTE stands at 140 as of March 24th, 2011. The GSA is a vendor-led association, so its forecasts are somewhat optimistic, but this is a respectable number of operators who have bravely started to invest in LTE.


When I traditionally look at the penetration capability of a new technology in the wireless ecosystem, I usually rely on a few indicators to gauge where the technology is in its adoption cycle.

  • Mass market penetration: I would look for at least 30% of a given population having implemented the technology and having devices in their hands capable of using it. That could mean 30% of network operators in the case of LTE, but more importantly, I would look at 30% of the subscribers of a given operator having an LTE subscription as a sign of maturity.
  • Ease of use, ease of adoption: In this case, I look at what the barriers to entry are for subscribers or operators to acquire and use the technology: the cost of license auctions, the relative cost of increasing HSPA+ network density, and the required investments. What will femtocells or data offload do to the demand? 
  • Interoperability: This is a key criterion that is overlooked time and again in wireless technology introductions. This is about interoperability between the devices themselves, between the devices and the network, backwards compatibility, the networks between themselves, roaming, interconnectivity, etc... This has been a consistent issue that has plagued the adoption and success of many wireless technologies over the last decades (WAP, MMS, PoC, IMS...).

In my next post, I  will provide my opinion on the challenges of LTE and the time frame for its mass market adoption.