
Thursday, September 14, 2023

O-RAN alliance rApps and xApps typology

 An extract from the Open RAN RIC and Apps report and workshop.

1.    O-RAN defined rApps and xApps

1.1.        Traffic steering rApp and xApp

Traditional RAN provides few mechanisms to load balance or to steer traffic onto specific radio paths. Most deployments see overlaps of coverage between different cells in the same spectrum, as well as other spectrum layered in, allowing performance, coverage, density and latency scenarios to coexist. The methods by which a UE is connected to a specific cell and a specific channel are mostly static, based on the location of the UE, signal strength, service profile and the parameters used to hand over a connection from one cell to another, or within the same cell from one bearer or sector to another. The implementation is vendor specific and statically configured.



Figure 9: Overlapping cells and traffic steering

Non-RT RIC and rApps offer the possibility to change these handover and assignments programmatically and dynamically, taking advantage of policies that can be varied (power optimization, quality optimization, performance or coverage optimization…) and that can change over time. Additionally, the use of AI/ML technology can provide predictive input capability for the selection or creation of policies allowing a preferable outcome.

The traffic steering rApp is a means to design and select traffic profile policies and to allow the operator to instantiate these policies dynamically, either per cell, per sector, per bearer, or even per UE or per type of service. The SMO or the Non-RT RIC collects RAN data on traffic, bearers, cells, load, etc. from the E2 nodes and instructs the Near-RT RIC to enforce a set of policies through the established parameters.
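As a rough illustration, such a policy pushed from the Non-RT RIC towards the Near-RT RIC is essentially a structured document scoped to cells, slices or UEs. The sketch below builds one in Python and pushes it to a RIC endpoint; the field names and URL are illustrative assumptions, not the normative O-RAN A1 schema.

```python
import requests  # assumes the 'requests' package is installed

# Illustrative traffic steering policy: steer video-heavy traffic away from
# cell-A towards cell-B when load crosses a threshold. Field names are
# assumptions for the example, not the normative O-RAN A1 policy schema.
policy = {
    "policy_id": "ts-video-offload-001",
    "scope": {"cell_list": ["cell-A"], "slice_id": "eMBB-1"},
    "statement": {
        "trigger": {"metric": "prb_utilization", "threshold_pct": 80},
        "action": {"preference": "PREFER", "target_cells": ["cell-B"]},
    },
}

def push_policy(non_rt_ric_url, policy):
    """PUT the policy to a (hypothetical) Non-RT RIC policy endpoint."""
    url = f"{non_rt_ric_url}/a1-p/policies/{policy['policy_id']}"
    resp = requests.put(url, json=policy, timeout=5)
    return resp.status_code

# Example (assumes a RIC reachable at this address):
# print(push_policy("http://non-rt-ric.example:8080", policy))
```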

1.2.       QoE rApp and xApp

This rApp assumes that specific services such as AR/VR will require different QoE parameters that need to be adapted in a semi-dynamic fashion. It proposes the use of AI/ML to predict traffic load and QoE conditions in order to optimize traffic profiles.

UE and network performance data transit from the RAN to the SMO layer over the O1 interface; QoE AI/ML models are trained on these data, infer the current state and predict its evolution over time; the rApp then transmits QoE policy directives to the Near-RT RIC via the Non-RT RIC.

1.3.       QoS based resource optimization rApp and xApp

The QoS based resource optimization rApp is an implementation of network slicing optimization for the RAN. Specifically, it enables the Non-RT RIC to guide the Near-RT RIC in the allocation of Physical Resource Blocks (PRBs) to a specific slice or sub-slice, should the Slice Level Specification not be satisfied by static slice provisioning.
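A minimal sketch of the underlying control logic, assuming the rApp receives per-slice throughput measurements and targets (the data structure, names and thresholds are illustrative):

```python
def rebalance_prbs(slices, total_prbs):
    """Naive PRB re-allocation: give slices that miss their target a share of
    spare PRBs proportional to their shortfall, within the cell's PRB budget.
    'slices' maps slice_id -> {"measured_mbps": x, "target_mbps": y, "prbs": n}.
    """
    # Shortfall ratio per slice (0 when the target is already met)
    shortfall = {
        s: max(0.0, v["target_mbps"] - v["measured_mbps"]) / v["target_mbps"]
        for s, v in slices.items()
    }
    spare = total_prbs - sum(v["prbs"] for v in slices.values())
    total_shortfall = sum(shortfall.values())
    if spare <= 0 or total_shortfall == 0:
        return {s: v["prbs"] for s, v in slices.items()}  # nothing to do
    return {
        s: v["prbs"] + int(spare * shortfall[s] / total_shortfall)
        for s, v in slices.items()
    }

# Example: the "urllc" slice misses its target and receives the spare PRBs
cell = {
    "embb":  {"measured_mbps": 300, "target_mbps": 250, "prbs": 60},
    "urllc": {"measured_mbps": 8,   "target_mbps": 20,  "prbs": 10},
}
print(rebalance_prbs(cell, total_prbs=100))
```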

1.4.       Context-based dynamic handover management for V2X rApp and xApp

Since mobile networks have been designed for mobile but relatively low-velocity users, providing high-speed, reliable mobile service along highways requires specific designs and configurations. As vehicles become increasingly connected to the mobile network and may rely on network infrastructure for a variety of uses, vehicle-to-everything (V2X) use cases are starting to appear, primarily as research and science projects. In this case, the app uses AI/ML models to predict whether a UE belongs to a V2X category and to predict its trajectory, in order to facilitate cell handover along its path.

1.5.       RAN Slice Assurance rApp and xApp

3GPP has defined the concept of creating a connectivity product with specific attributes (throughput, reliability, latency, energy consumption) applicable to specific devices, geographies, enterprises… as slices. In an O-RAN context, the Non-RT RIC and Near-RT RIC can provide optimization strategies for network slicing. In both cases, the elements can monitor the performance of the slice and perform large or small interval adjustments to stay close to the slice's Service Level Agreement (SLA) targets.

Generally speaking, these apps facilitate the allocation of resources according to slice requirements and their dynamic optimization over time.

1.6.       Network Slice Instance Resource Optimization rApp

The NSSI rApp aims to use AI/ML to model traffic patterns of a cell through historical data analysis. The model is then used to predict network load and conditions for specific slices and to dynamically and proactively adjust resource allocation per slice.

1.7.       Massive MIMO Optimization rApps and xApps

Massive MIMO (mMIMO) is a key technology to increase performance in 5G. It uses complex algorithms to create signal beams that minimize interference and provide narrow transmission channels. This technology, called beamforming, can be configured to provide variations in the vertical and horizontal axes (azimuth and elevation), resulting in beams of different shapes and performance profiles. Beamforming and massive MIMO are characteristics of the Radio Unit, with the DU providing the necessary data for the configuration and direction of the beams.

In many cases, when separate cells overlap a given geography for coverage or density, with either multiple macro cells or a mix of macro and small cells, the mMIMO beams are usually configured statically and manually, based on the cell situation. As traffic patterns, the urban environment and interference / reflection change, it is not rare that the configured beams lose efficiency over time.

In this instance, the rApp collects statistical and measurement data from the RAN to inform a predictive model of traffic patterns. This model, in turn, informs a grid of beams that can be applied to a given situation. This grid of beams is transmitted to the DU through the Near-RT RIC and a corresponding xApp, responsible for assigning the specific PRB and beam parameters to the RU. A variant of this implementation does not require a grid of beams or AI/ML, but a list of statically configured beams that can be selected based on specific thresholds or RAN measurements.
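For illustration, selecting beams from a grid of beams based on a predicted traffic distribution could look like the following sketch (the grid, the scoring and the traffic hotspots are invented for the example):

```python
# Hypothetical grid of beams: (azimuth deg, elevation deg, horizontal beamwidth deg)
GRID_OF_BEAMS = [
    (-30, 6, 20), (0, 6, 20), (30, 6, 20),                      # wide beams
    (-45, 10, 10), (-15, 10, 10), (15, 10, 10), (45, 10, 10),   # narrow beams
]

def score_beam(beam, hotspots):
    """Score a beam by the predicted traffic (Mbps) of hotspots that fall
    inside its azimuth beamwidth. 'hotspots' is a list of (azimuth deg, mbps)."""
    azimuth, _elevation, width = beam
    return sum(mbps for az, mbps in hotspots if abs(az - azimuth) <= width / 2)

def select_beams(hotspots, max_beams=4):
    """Pick the beams covering the most predicted traffic."""
    ranked = sorted(GRID_OF_BEAMS, key=lambda b: score_beam(b, hotspots),
                    reverse=True)
    return ranked[:max_beams]

# Example: predicted evening hotspots around -20 and +40 degrees of azimuth
print(select_beams([(-20, 120), (40, 80), (5, 30)]))
```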

Additional apps leverage specific MIMO features such as downlink transmit power, Multiple User MIMO and Single User MIMO; by reading UE performance data, they adjust the transmit power or the beam parameters to improve the user experience or the overall spectral efficiency.

1.8.       Network energy saving rApps and xApps

These apps are a collection of methods to optimize power consumption in the open RAN domain.

    Carrier and cell switch off/on rApp:

A simple mechanism to identify, within a cell, the capacity needed and whether it is possible to reduce power consumption by switching off frequency layers (carriers) or the entire cell, should sufficient coverage / capacity exist in adjoining overlapping cells. An AI/ML model on the Non-RT RIC might assist in the selection and decision, as well as provide a predictive model. Prediction in this case is key, as one cannot simply switch off a carrier or a cell without first gracefully handing over its traffic to an adjoining carrier or cell, in order to limit the negative impact on quality of experience.
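A hedged sketch of that decision rule, assuming the rApp knows the candidate carrier's predicted load and the measured headroom of its overlapping neighbors (thresholds and data are illustrative):

```python
def can_switch_off(candidate_load_mbps, neighbors, margin=0.8):
    """Return (decision, handover_plan): switch the candidate carrier/cell off
    only if overlapping neighbors can absorb its predicted load while staying
    under 'margin' of their capacity. 'neighbors' maps cell_id -> (load, capacity)."""
    headroom = {cell: margin * cap - load
                for cell, (load, cap) in neighbors.items()}
    total_headroom = sum(h for h in headroom.values() if h > 0)
    if total_headroom < candidate_load_mbps:
        return False, {}
    # Hand traffic over proportionally to available headroom before switch-off
    plan = {cell: round(candidate_load_mbps * max(h, 0) / total_headroom, 1)
            for cell, h in headroom.items()}
    return True, plan

# Example: a lightly loaded night-time carrier with two overlapping neighbors
decision, plan = can_switch_off(
    candidate_load_mbps=40,
    neighbors={"cell-B": (100, 300), "cell-C": (200, 250)},
)
print(decision, plan)
```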

    RF Channel reconfiguration rApp:

mMIMO is achieved by combining radiating elements to form the beams. A mMIMO antenna array with 64 transmitters and 64 receivers (64T64R) can be reconfigured to 32, 16 or 8 T/R, for instance, resulting in a roughly linear power reduction. An AI/ML model can be used to determine the optimal antenna configuration based on immediate and predicted traffic patterns.

Monday, July 7, 2014

Speed = QoE?

I was chairing the LTE World Summit in Amsterdam last week. One of the great presentations made there was by Bouygues Telecom's EVP of Strategy, Frederic Ruciak. He presented the operator's strategy for the LTE launch in France, which led the challenger to the number one LTE market share in less than one year. He showed that consumers were not ready to pay for "more speed" because they had been sold the myth of mobile internet too many times. Consumers had been sold WAP on GPRS and EDGE, then wireless internet on 3G and HSPA, with low satisfaction. Using the internet as the reason to upgrade to LTE is a losing proposition.


One of the mistakes many make in this industry is equating speed with quality of experience (QoE). 


Our quest to increase speed in wireless networks is futile if we do not consider the other side of the coin: service experience.

For instance, there is always a wave of enthusiasm at the launch of a new radio technology, when few users have access to ample network resources and the services that ride on it are those that were designed for the previous generation.
I have a generation 1 iPad and the latest iPad mini, both on wifi. When I bought my first iPad, I had a great browsing experience. Navigation was fluid and fast. Now, it is rare that I can browse for more than 10 minutes without a crash. It is not that the browser is corrupted, just that web pages have grown in size and complexity: where it took 2 seconds to load 10 elements 4 years ago, it now tries to load 40+ elements and inevitably runs out of resources and memory, and crashes. The iPad mini is not as bad, but not as good as the first generation was 4 years ago.

When we look at LTE, LTE-Advanced and soon 5G, it seems that the only "benefit" we are selling as an industry is speed. We tend to infer an improvement in QoE, but it is rarely there. If I used LTE to browse a monochromatic, text-based wap site, I am sure that speed would be an improvement in QoE. But no: as LTE is launched, web pages grow in complexity and size, encryption and obfuscation are creeping in, video is graduating from SD to HD to 4K... With video, the problem is even larger, as the increase in screen size and definition seems to consistently outpace network speeds.
It becomes harder to sell a new technology if all it does is keep up or catch up with the service, rather than drastically improve the user experience.

Wednesday, July 2, 2014

SDN & NFV part IV: testing / monitoring in Wireless Networks

As mentioned (here, here and here), I have been busy working on the various benefits and impacts of implementing virtualized network functions in wireless networks.
One problem I have come across lately stems from the fact that one of the tenets of NFV and SDN is to reduce the potential for vendor lock-in at the hardware level. It is true that virtualization of the software allows commercial off-the-shelf servers to be used in lieu of appliances, for a fraction of the cost of acquisition and operation.
One of the problems that is emerging is the testing, monitoring, troubleshooting and quality assurance of virtualized networks. Vendors in this field have traditionally relied on passive probes performing traffic interception and analysis at various points of the network / interfaces.

In an SDN/NFV world, it becomes difficult to test / monitor / troubleshoot a single service when the resources associated with the service are mutualized, virtualized and elastic.
Right now, most virtualized functions are at the service / product level, i.e. a vendor takes a product, for instance an EPC, and virtualizes it and its components. The deployment remains monolithic and while there might be elasticity within the solution, the components themselves cannot be substituted. This results in a multi-vendor environment only as far as the large functions are concerned, but not at the component level.
Monitoring and assuring traffic between components becomes problematic because of the lack of standardization of East-West interfaces.

Testing, monitoring and QA vendors must virtualize their offering through virtualized software probes and taps implemented as virtual network interface cards (vNICs) or switches, but more importantly must deeply integrate with orchestrators, element managers and controllers in order to be able to monitor the creation, instantiation and growth of virtual machines.

This implementation requires the maintenance of a stateful mapping of network functions and traffic flow in order to correlate data and signalling planes.
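As a rough sketch of what that stateful mapping could look like: a monitoring function that consumes (hypothetical) orchestrator lifecycle events and keeps a VNF-instance-to-flow map, so that data and signalling records can later be correlated. The event format and field names are assumptions, not any specific orchestrator's API.

```python
from collections import defaultdict

class VnfFlowMap:
    """Minimal sketch of a stateful mapping between VNF instances and the
    traffic flows observed by virtual probes/taps."""

    def __init__(self):
        self.instances = {}             # instance_id -> metadata
        self.flows = defaultdict(set)   # instance_id -> set of flow ids

    def on_lifecycle_event(self, event):
        """event example: {"type": "instantiated", "instance_id": "vEPC-3",
        "vnf": "vEPC", "host": "compute-7"} (illustrative format)."""
        if event["type"] in ("instantiated", "scaled_out"):
            self.instances[event["instance_id"]] = {
                "vnf": event["vnf"], "host": event.get("host"),
            }
        elif event["type"] in ("terminated", "scaled_in"):
            self.instances.pop(event["instance_id"], None)
            self.flows.pop(event["instance_id"], None)

    def on_flow_record(self, instance_id, flow_id):
        """Associate a flow seen on a vNIC/vSwitch tap with a VNF instance."""
        if instance_id in self.instances:
            self.flows[instance_id].add(flow_id)

    def flows_for_vnf(self, vnf_name):
        """Correlate: all flows currently traversing instances of a VNF type."""
        return {fid for iid, meta in self.instances.items()
                if meta["vnf"] == vnf_name for fid in self.flows[iid]}

# Example
m = VnfFlowMap()
m.on_lifecycle_event({"type": "instantiated", "instance_id": "vEPC-3",
                      "vnf": "vEPC", "host": "compute-7"})
m.on_flow_record("vEPC-3", "tcp-10.0.0.1:443")
print(m.flows_for_vnf("vEPC"))
```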

At this stage, vendors in this field must prepare themselves for a rather long business development engagement in order to penetrate the ecosystem and integrate with each vendor / solution independently. The effort is not unlike one of orchestrators who need to integrate deeply with each network virtual function vendor in order to accurately understand their respective capabilities and to build the VNF catalogue and lifecycle.

As for many in the NFV space, the commercial strategy must evolve as well, towards licensing rather than transaction / volume charging. Virtualized network functions will see the number of elements and vendors grow into the hundreds and thousands, and inevitably, large system integrators will become the key single interface to network operators.

Tuesday, July 1, 2014

Mobile network 2030





It is summer, nice and warm. England and Italy are out of the world cup, France will beat Germany on Friday, then Brazil and Argentina in the coming weeks to obtain their second FIFA trophy. It sounds like a perfect time for a little daydreaming and telecom fiction...

The date is February 15, 2030

The mobile world congress is a couple of weeks away and has returned to Cannes, as the attendance and indeed the investments in what used to be mobile networks have reduced drastically over the last few years. Finished are the years of opulence and extravagant launches in Barcelona, the show now looks closer to a medium sized textile convention than the great mass of flashy technology and gadgets it used to be in its heyday. 

When did it start to devolve? What was the signal that killed what used to be a trillion-dollar industry in the 90's and early 2000's? As usual, there is not one cause but a convergence of events that took on a momentum that few saw coming and fewer tried to stop.

Net neutrality was certainly one of these events. If you remember, back in 2011, people started to realize how exposed fixed and wireless networks were to legal and illegal interception. Following the various NSA scandals, public pressure mounted to protect digital privacy.
In North America, the battle was fierce between pro and con neutrality, eventually leading to a status quo of sorts, with many content providers and network operators in an uneasy collaborative dynamic. Originally, content providers unwilling to pay for traffic delivery in wireless networks attempted to secure superior user experience by implementing increasingly bandwidth hungry apps. When these started to come into contention for network resources, carriers started to step in and aggressively throttle, cap or otherwise "optimize" traffic. In reaction, premium content providers moved to an encrypted traffic model as a means to obfuscate traffic and prevent interception, mitigation and optimization by carriers. Soon enough, though, the encryption-added costs and latency proved impractical. Furthermore, some carriers started to throttle and cap all traffic equally, claiming to adhere to the letter of net neutrality, which ended up having a terrible effect on user experience. In the end, cooler heads prevailed and content providers and carriers created integrated video networks, where transport, encryption and ad insertion were performed at the edge, while targeting, recommendation and fulfillment ended up in the content provider's infrastructure.

In Europe, content and service providers saw at the same time "net neutrality" as the perfect excuse to pressure political and regulatory organizations to force network providers to deliver digital content unfiltered, un-prioritized at best possible effort. The result ended up being quite disastrous, as we know, with content being produced mostly outside Europe and encrypted, operators became true utility service providers. They discovered overnight that their pipes could become even dumber than they were.

Of course, the free voice and texting services launched by some of the new 5G entrants in the 2020's accelerated the trend, as well as the nationalization of many of the pan-European network operator groups.

The transition was relatively easy, since many had transcended into fully virtual networks and contracted ALUSSON, the last "European" Telecom Equipment Manufacturer, to manage their networks. After they had collectively spent over 100 billion euros to virtualize them in the first place, ALUSSON emerged as the only clear winner of the cost benefits brought by virtualization.
Indeed, virtualization was attractive and very cost effective on paper, but proved very complex and organizationally intensive to implement in the end. Operators had miscalculated their capacity to shift their workforce from telecom engineering to IT, when they found out that the skill set to manage their networks had always been in the vendors' hands. Few groups were able to massively retool their workforce, if you remember the great telco strikes of 2021-2022.
In the end, most ended up contracting out and transitioning their assets to their network vendor. Obviously, liberated from the task of managing their networks, most were eager to launch new services, which was one of the initial rationales for virtualization. Unfortunately, they found out that service creation was much better implemented by small, agile, young entrepreneurial structures than by large, unionized, middle-aged ones... With a couple of notable exceptions, broadband networks were written off as broadband access was written into European countries' constitutions, and networks were aggregated at the pan-European level to become pure utilities when they were not downright nationalized.

Outside Europe and North America, Goopple and HuaTE dominate, after voraciously acquiring licenses in emerging countries ill-equipped to weigh the long-term value of these licenses against the free network infrastructure these companies provided. The launch of their proprietary SATERR (Satellite Aerial Terrestrial Relay) technology proved instrumental in creating the first fully vertical service / network / content / device conglomerates.

Few were the operators who have been able to discern the importance of evolving their core asset "enabling communication" into a dominant position in their market. Those who have succeeded share a few common attributes:

They realized first that their business was not about counting calls, bytes or texts but about enabling communication. They started to think in terms of services rather than technology and understood that the key was in service enablement. Understanding that services come and go and die in a matter of months in the new economy, they strove not to provide the services but to create the platform to enable them.

In some cases, they transitioned to full advertising and personal digital management agencies, harnessing big data and analytics to enrich digital services with presence, location, preference, privacy and corporate awareness. This required many organizational changes, but as it turned out, marketing analysts were much easier and more cost-effective to recruit than network and telecom engineers. Network management became the toolset, not the vocation.

In other cases, operators became abstraction layers, enabling content and service providers to better target, advertise, aggregate, obfuscate, disambiguate and contextualize physical and virtual communication between people and machines.

In all cases, they understood that the "value chain" as they used to know it, and the consumer need for communication services, were better served by an ever-changing ecosystem, where there was no "position of strength" and where coopetition was the rule rather than the exception.

Wednesday, June 18, 2014

Are we ready for experience assurance? part II

Many vendors' reporting capabilities are just fine when it comes to troubleshooting issues associated with the connectivity or health of their own system. Their capability to infer, beyond observation of their own system, the health of a connection or of the network is oftentimes limited.

Analytics, by definition, require a large dataset, ideally covering several systems and elements, to provide correlation and pattern recognition on otherwise seemingly random events. In a complex environment like a mobile network, it is extremely difficult to understand what a user's experience is on their phone. There are means to extrapolate and infer the state of a connection, a cell or a service by looking at fluctuations in network connections.

Traffic management vendors routinely report on the state of a session by measuring the TCP connection and its changes. Being able to associate with that session the device type, time of day, location and service being used is good, but a far cry from analytics.
Most systems will be able to detect if a connection went wrong and a user had a sub-par experience. Being able to tell why is where analytics' value lies. Being able to prevent it is big data territory.
So what is experience assurance? How does (should) it work?

For instance, a client calls the call center to complain about a poor video experience. The video was sluggish to start with, started 7 seconds after pressing play and started buffering after 15 seconds of playback.
A DPI engine would be able to identify whether TCP and HTTP traffic were running efficiently at the time of the connection.
A probe in the RAN would be able to report a congestion event in a specific location.
A video reporting engine would be able to look at whether the definition and encoding of the video was compatible with the network speed at the time.
The media player in the device would be able to report whether there were enough resources locally to decode, buffer, process and play the video.
A video gateway should be able to detect the connection impairment in real time and to provide the means to correct or elegantly notify of the impending state of the video before the customer experiences a negative QoE.
A big data analytics platform should be able to point out that the poor experience is the result of a congestion in that cell that occurs nearly daily at the same time because the antenna serving that cell is in an area where there is a train station and every day the rush hour brings throngs of people connecting to that cell at roughly the same time.
An experience assurance framework would be able to provide feedback instructions to the policy framework, forcing downloads, emails and non-real-time data traffic to be delayed to accommodate short bursts of video usage until the congestion passes. It should also allow the operator to decide what the minimum level of quality should be for video and data traffic, in terms of delivery, encoding speed, picture quality, start-up time, etc., and to proactively manage the video traffic to that target when the network "knows" that congestion is likely.
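A toy sketch of the correlation step described above: given observations from the DPI engine, a RAN probe and the device player for one poor video session, pick the most likely cause. The field names, thresholds and sample values are invented for the example.

```python
def diagnose(session):
    """Return probable root causes for a poor video session from a few
    correlated observations. Thresholds and field names are illustrative."""
    causes = []
    if session.get("ran_congestion"):
        causes.append("RAN congestion in serving cell at playback time")
    if session.get("tcp_retransmit_pct", 0) > 3:
        causes.append("transport impairment (high TCP retransmissions)")
    if session.get("video_bitrate_kbps", 0) > session.get("throughput_kbps", 0):
        causes.append("content encoded above achievable throughput")
    if session.get("device_buffer_s", 10) < 2:
        causes.append("device-side starvation (low player buffer)")
    return causes or ["no impairment detected from available data"]

# Example: the train-station cell at rush hour
print(diagnose({
    "ran_congestion": True,
    "tcp_retransmit_pct": 5.2,
    "video_bitrate_kbps": 1800,
    "throughput_kbps": 900,
    "device_buffer_s": 1.2,
}))
```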

Experience assurance is a concept that is making its debut when it comes to data and video services. To be effective, a proper solution should ideally be able to gather real-time events from the RAN, the core, the content, the service provider and the device, and to decide in real time what the nature of the potential impairment is, what the possible courses of action are to reduce or negate the impairment, or what the means are to notify the user of a sub-optimal experience. No single vendor, to my knowledge, is able to achieve this use case at this point in time, either on its own or through partnerships. The technology vendors are too specialized, and the elements involved in the delivery and management of data traffic are too loosely integrated, to offer real experience assurance today.

Vendors who want to provide experience assurance should first focus on the data. Most systems create event or call logs, registering hundreds of parameters every session, every second. Properly representing what is happening on the platform itself is quite difficult. It is an exercise in interpretation and representation of what is relevant and actionable and what is merely interesting. This is an exercise in small data. Understanding relevance and discriminating good data from over engineered logs is key.


A good experience assurance solution must rely on a strong detection, analytics and traffic management solution. When it comes to video, this means a video gateway that is able to perform deep media inspection and to extract data points that can be exported into a reporting engine. The data exported cannot be just a dump of every event or every session. The reporting engine is only going to be as good as the quality of the data that is fed into it. This is why traffic management products must be designed with analytics in mind from the ground up if they are to be efficiently integrated within an experience assurance framework.

Tuesday, June 17, 2014

Are we ready for experience assurance? part I




As mentioned before, Quality of Experience (QoE) was a major theme in 2012-2013. How to detect, measure and manage various aspects of the customer experience has in many cases taken precedence over the savings or monetization rhetoric at vendors and operators alike.

As illustrated in a recent telecoms.com survey, operators see network quality as the most important differentiator in their market. In their overwhelming majority, they would like to implement business models where they receive a revenue share for a guaranteed level of quality. The problem comes with defining what quality means in a mobile network.


It is clear that many network operators in 2014 have come to the conclusion that they are ill-equipped to understand the consumer’s experience when it comes to data services in general and video in particular. It is not rare that a network operator’s customer care center would receive complaints about the quality of the video service, when no alarm, failure or even congestion has been detected. Obviously, serving your clients when you are blind to their experience is a recipe for churn.

As a result, many operators have spent much of 2013 requesting information and evaluating various vendors' capabilities to measure video QoE. We have seen (here and here) the different types of video QoE measurement.

This line of questioning has spurred a flurry of product launches, partnerships and announcements in the field of analytics. Here is a list of announcements in the field in the last few months:
  • Procera Networks partners with Avvasi
  • Citrix partners with Zettics and launches ByteMobile Insight
  • Kontron partners with Vantrix and launches cloud based analytics
  • Sandvine launches the Real Time Entertainment Dashboard
  • Guavus partners with Opera Skyfire
  • Alcatel Lucent launches Motive Big Network Analytics
  • Huawei partners with Actix to deliver customer experience analytics…

Suddenly, everyone who has a web GUI and a reporting engine delivers delicately crafted analytics, surfing the wave of big data, Hadoop and NFV as a means to satisfy the operators' ever-growing need for actionable insight.

Unfortunately, in some cases, the operator will find itself with a collection of ill-fitting dashboards providing anecdotal or contradictory data. This is likely to lead to more confusion than problem solving. So what is (or should be) experience assurance? The answer in tomorrow's post.


Tuesday, November 5, 2013

Introducing the Mobile Video Alliance

It was a great and unique chance to be invited to the inaugural meeting of the Mobile Video Alliance in London this week. I would like to thank and congratulate Matt Stagg from EE and Rory Murphy from Equinix, who did a great job of bringing together an amazing panel of participants from Akamai, Amazon, BBC, EE, BT, Lovefilm, Netflix, O2, Qualcomm, Sky, Three UK, Vodafone Global and others.

It was an even greater honor to be able to present my views on the future of mobile video and what the ecosystem should focus on to improve the consumer's user experience.

You can find my presentation and the accompanying video below.






In short, it was my first experience of executives from the whole value chain getting together to discuss the strategy, business and technology improvements necessary to enhance the consumer's video quality of experience.
Subjects of discussion ranged widely, from adaptive bit rate best practices to transcoding, caching, roaming and data caps, measuring QoE, mobile advertising... in a refreshingly neutral, non-competitive environment, without vendors trying to push a specific agenda.

The Mobile Video Alliance is a unique forum for the industry to come together and solve the issues that are hampering its capacity to grow profitably. Stay tuned, I will follow and report on its progress.

Friday, July 5, 2013

The war of machine 2 machine: Internet of nothing?

A recent Tweet conversation got me thinking about all the hoopla about machine-to-machine / internet of everything.

Many telecom equipment manufacturers hail the trend as the next big thing for wireless networks, both a bounty to be harvested and a great opportunity for new revenue streams.

There is certainly a lot to think about when more and more devices that were not designed for real-time connectivity are suddenly able to exchange, report, alarm... All these devices that would have been well served by rudimentary logging software or technology, most of the time for manual retrieval (think of your home gas, water or electricity meters being read by a technician), could in the future be eligible for over-the-air data transfer.

A similar discussion comes to mind from the LTE World Summit, where I was chairing the data explosion stream. A utility company, in Italy I think, had rolled out these "smart" meters. The implementation in the lab was flawless; the utility was going to save millions, with only a handful of employees monitoring the data center instead of hundreds scouring the countryside manually reading meters. What was unexpected was that all the meters had the same behavior, sending keep-alives and reporting logs at the same time. This brought the wireless network down, in a self-inflicted signalling and payload storm.

When I look at all the companies that have created apps with no knowledge of how a phone or a mobile network behaves, I can't help but think about the consequences of meters, cars, irrigation sensors, gas turbines, fridges and traffic lights trying to send snippets of data and signalling through a wireless network with no understanding of how these signals and streams will affect the infrastructure.

This immediately brings to mind horrific headlines: "Sheep herd monitoring devices bring down network in New Zealand!", "Water and electricity meters fighting over bandwidth..."

More seriously, it means all these device manufacturers will need serious programmers who understand wireless, not only to put the transmitters on the devices but also to code efficiently so that signalling and payload are optimized. Network operators will also need to publish best practices for M2M traffic in terms of frequency, volume, etc., with stringent SLAs, since most of this traffic will be discrete (subscription paid with the service or device, no usage payment).
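To make the smart-meter storm point concrete, here is a minimal sketch of one such best practice: spreading device reports with random jitter and backing off on failure, instead of having every device report at the top of the hour. The parameter values are illustrative, not an operator-published guideline.

```python
import random

def next_report_delay(base_interval_s=3600, jitter_pct=0.25,
                      failures=0, max_backoff=4):
    """Return the delay in seconds before a device's next report.

    - Random jitter spreads reports so thousands of meters do not signal
      simultaneously.
    - Exponential backoff on failures avoids hammering a congested network.
    """
    jitter = random.uniform(-jitter_pct, jitter_pct) * base_interval_s
    backoff = 2 ** min(failures, max_backoff)
    return (base_interval_s + jitter) * backoff

# Example: 5 meters scheduled around a nominal hourly report
print([round(next_report_delay()) for _ in range(5)])
```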

Wednesday, October 31, 2012

How to monetize mobile video part II

These posts are excerpts from my article in Mobile Europe from October 2012.

The Age Of Video: How Mobile Networks Must Evolve


In 3G, mobile network operators find themselves in a situation where their core network is composed of many complex elements (GGSN, EPC, browsing gateways, proxies, DPI, PCRF…) that are extremely specialized but have been designed with transactional data in mind. Radio access is a scarce resource, with many operators battling with their regulators to obtain more spectrum. The current model for purchasing capacity, based on buying more base stations and densifying the network, is finding its limits. Costs for network build-up are even expected to exceed data revenues in the coming years.
On the technical front, some operators are approaching Shannon's limit, the theoretical bound for spectral efficiency. Diminishing returns are the rule rather than the exception as the RAN (Radio Access Network) becomes denser for the same available spectrum: noise and interference increase.
On the financial front, should an operator follow the demand, it would have to double its mobile data capacity on a yearly basis. The projected revenue increase for data services shows only a CAGR of 20% through 2015. How can operators keep running their business profitably?
Operationally, doubling capacity every year seems impossible for most networks, which plan roll-outs over 3 to 5 years. A change of paradigm is necessary.
Solutions exist and are starting to emerge: upgrading to HSPA+ or LTE, using smaller cells, drastically changing the pricing structure of video and social services, network and video optimization, offloading part of the traffic to wifi, implementing adaptive bit rate, optimizing the radio link, caching, using CDNs, imagining new business models between content providers, device manufacturers and operators…

Detect

The main issue is one of network intelligence. Mobile network operators want their network utilization optimized, not minimized. Traffic patterns need to be collected, analyzed and represented so that data, and particularly video, can be projected, not just at the country, multi-year level as is done today. It is necessary to build granular network planning capacity per sector and per cell, at RAN, core and backhaul levels, with tools that are video aware. Current DPI and RAN monitoring tools cannot detect video efficiently or analyze it deeply enough to allow for pattern recognition. Additionally, it is necessary to be able to isolate, follow and act on individual video streams on a per-subscriber, per-service, per-property, per-CDN level, not simply at the protocol level.
Current mobile network analytics capabilities are mostly inherited from 3G. DPI and traffic management engines rely mostly on protocol analysis and packet categorization to perform their classification and reporting. Unfortunately, in the case of video, this is insufficient. Video takes many forms in mobile networks and is delivered over many protocols (RTSP, RTMP, HTTP, MPEG2-TS…). Recognizing these protocols is not enough to perform the necessary next steps. Increasingly, video traffic is delivered over HTTP progressive download. Most current analytics capabilities cannot recognize video as a traffic type today; they rely on URL recognition rather than traffic analysis. This leads to issues: how do you differentiate a user browsing between YouTube pages from one watching a video? How do you discriminate embedded videos in pages? How do you recognize YouTube videos embedded in Facebook? How do you know whether a video is an advertisement or main programming? How do you know whether a video should be delivered in HD or at a lower resolution?
In order to categorize and manage video accurately, it is necessary to recognize, at a minimum, the video protocol, container, codec, encoding rate, resolution, duration and origin, to be able to perform pattern recognition.
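As a small illustration of looking inside the traffic rather than at the URL, the sketch below reads the duration from an MP4 file's moov/mvhd box and derives an average bitrate. It is a deliberately minimal parser, assuming a plain, non-fragmented MP4 with 32-bit box sizes; a production DPI engine would have to handle far more cases.

```python
import os
import struct

def _find_box(buf, name):
    """Scan sibling MP4 boxes in 'buf' and return the payload of box 'name'."""
    offset = 0
    while offset + 8 <= len(buf):
        size, box = struct.unpack(">I4s", buf[offset:offset + 8])
        if size < 8:          # 64-bit or zero-sized boxes not handled here
            return None
        if box == name:
            return buf[offset + 8:offset + size]
        offset += size
    return None

def mp4_duration_and_bitrate(path):
    """Return (duration_s, avg_kbps) for a simple, non-fragmented MP4 file."""
    with open(path, "rb") as f:
        data = f.read()
    moov = _find_box(data, b"moov")
    mvhd = _find_box(moov, b"mvhd") if moov else None
    if mvhd is None:
        return None
    version = mvhd[0]
    if version == 1:   # 64-bit times: skip 4B version/flags + 2x8B timestamps
        timescale, duration = struct.unpack(">IQ", mvhd[20:32])
    else:              # 32-bit times: skip 4B version/flags + 2x4B timestamps
        timescale, duration = struct.unpack(">II", mvhd[12:20])
    seconds = duration / timescale
    avg_kbps = (os.path.getsize(path) * 8 / 1000) / seconds if seconds else 0
    return round(seconds, 1), round(avg_kbps)

# Example: print(mp4_duration_and_bitrate("sample.mp4"))
```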

Measure Experience, not Speed or Size

The next necessary step, after identifying and indexing the video traffic, is the capacity to grade it from a quality standpoint. As video quality becomes synonymous with network quality in viewers' minds, mobile network operators must be able to measure and control video quality. Current capabilities in this space are focused on measuring network speed and content size and inferring user satisfaction. This is inadequate.
Any hope of monetizing mobile video for mobile network operators, beyond byte accounting, relies on being able to reliably grade video content in terms of quality. This quality measurement is the cornerstone for providing subscribers with the assurance that the content they view conforms to the level of quality they are entitled to. It is also necessary for network operators to establish a baseline with content providers and aggregators, who view content quality as one of the main elements of pricing.
A uniform Quality of Experience (QoE) measurement standard is necessary for the industry to progress. Today, there is no valid QoE metric for mobile networks, leaving mobile operators relying on sparse proprietary tools, often derived from or created for broadcast and professional video, and wholly inadequate for mobile networks. Mobile network operators must be able to measure QoE per video, subscriber, session, sector, cell, origin and CDN if they want to create intelligent charging models.

Analyze, Segment Consumers and Traffic

Mobile network operators have been segmenting their customer base efficiently, building packages, bundles and price plans adapted to their targets. In the era of video, it is not enough.
Once traffic is identified, indexed and recognized, it is important to segment the population and usage. Is video traffic mostly from premium content providers and aggregators or from free user-generated sites? Are videos watched mostly in long form or short form? Are they watched on tablets or smartphones? Are they very viral and watched many times, or are consumers more likely to follow the long tail? All these data points and many others are necessary to understand the nature of subscribers' consumption and will dictate the solutions that are most appropriate. This is a crucial step to be able to control the video traffic.

Control, Manage

Once video traffic is correctly identified and indexed, it becomes possible to manage it. This is a controversial topic, as net neutrality as a concept is far from settled, at least in the mobile world. My view is that in a model where scarcity (spectrum, bandwidth) and costs are borne by one player (operators) while revenue and demand are driven by others (content providers and subscribers), net neutrality is impractical and anti-competitive. Unlike in fixed networks, where quasi-unlimited capacity and low entry costs allow easy introduction of content and services, mobile networks' cost structures and business models are managed systems where demand outgrows capacity and therefore negates equal access to resources. For instance, no one is talking about net neutrality in the context of television. I believe that operators will be able to discriminate traffic and offer models based on subscriber and traffic differentiation; many already can. It is just a recognition that today, with the current setup, traffic gets degraded naturally as demand grows, and DPI and traffic management engines already provide the means to shape and direct traffic in everyone's best interest. No one could think of networks where P2P file-sharing traffic could go unchecked and monopolize the network capacity.
Additionally, all videos are not created equal. There are different definitions, sizes, encoding rates. There are different qualities. Some are produced professionally, with big budgets, some are user generated. Some are live, some are file based. Some are downloaded, some are streamed. Some are premium, some are sponsored, some are freemium, some are free… Videos in their diversity bear the key to monetization.
The diversity of videos and their mode of consumption (some prefer to watch HD content in the highest quality, and will prefer download over streaming, others prefer a video that runs uninterrupted, with small load time even with a lesser quality…) is the key to monetization.

Monetize

Mobile network operators must be able to act based on video and subscriber attributes and to influence the user's experience. Being able to divert traffic to other bearers (LTE, wifi…) and to adjust a video's quality on the fly are important steps towards creating classes of service, not only amongst subscribers but also between content providers.
It is important as well to enable subscribers to select specific quality levels on the fly and to develop the charging tools to provide instant QoE upgrades.
With the capacity to detect, measure, analyze, segment, control and manage, operators can then monetize video. The steps highlighted here provide means for operators to create sophisticated charging models, whereby subscribers, content providers and aggregators are now included in a virtuous value circle.
Operators should explore creating different quality thresholds for the video content that transits through their network. This becomes a means to charge subscribers and / or content providers for premium guaranteed quality.

Monday, October 29, 2012

How to monetize mobile video part I


These posts are excerpts from my article in Mobile Europe from October 2012.
Video is a global phenomenon in mobile networks. In less than 3 years, it has exploded, from a marginal use case to over 50% of mobile traffic in 2012.
Mobile networks, until 3G, were designed and deployed predominantly for transactional data. Messaging, email and browsing are fairly low impact and lightweight in terms of payload and only necessitate speeds compatible with UMTS. Video brings a new element to the equation. Users rarely complain if their text or email arrives late; in fact, they rarely notice. Video provides immediate feedback. Consumers demand quality and are increasingly equating the network's quality with the video quality.
With the wide implementation of HSPA(+) and the first LTE deployments, together with the availability of new, attractive smartphones, tablets and ultrabooks, it has become clear that today's networks and price structures are ill-prepared to meet these new challenges.

From value chain to value circles: the operators’ broken business model

One of the main reasons why the current models are inadequate to monetize video is the unresolved changes in the value chain. Handset and device vendors have gained much power in the balance lately, and many consumers now choose a device or a brand before a network operator. In many cases, subscribers will churn from their current operator if they cannot get access to the latest device. Additionally, device vendors, with the advent of app stores, have become content aggregators and content providers, replacing the operators' traditional value added services.
In parallel, the suppliers of content and services are boldly pushing their consumer relationship to bypass traditional delivery media. These Over-The-Top (OTT) players extract more value from consumers than the access and network providers. This trend accelerates and threatens the fabric itself of the business model for delivery of mobile services.

Mobile video is already being monetized by premium content vendors and aggregators, through subscription, bundling and advertisement. Mobile network operators find themselves excluded from these new value circles overnight while forced to support the burden of the investment. In many cases, this situation is a self-inflicted wound.


Operators have competed fiercely to acquire more subscribers when markets were growing. As mature markets approach saturation, price differentiation became a strong driver to capture and retain subscribers. As 3G was being rolled out in the mid 2000's, the mobile markets were not yet saturated and mobile network operators' business models still revolved around customer acquisition. A favourite tool was the introduction of all-you-can-eat unlimited data plans to accelerate customer acquisition and capture through long-term captive contracts. As a result, customer penetration grew and accelerated with the introduction of smartphones and tablets by 2007. By 2009, traffic started to grow exponentially.
Data traffic was growing faster than expected: AT&T data traffic grew 80x between 2007 and 2010 and is projected to grow another 10x between 2010 and 2015. Korea Telecom traffic grew 2x in 2010, Softbank (Japan) traffic doubled in 2011, Orange France traffic doubled in 2010 and doubled again in 2011. In 2012, mature operators are trying to acquire smartphone users, as it is widely believed that their ARPU (Average Revenue Per User) is much higher (nearly twice) than that of traditional feature phone subscribers.
The cost to acquire these subscribers is significant, as many operators end up subsidizing the devices and having to significantly increase their network capacity.
At the same time, it appeared that consumer data consumption was changing: the "bandwidth hogs", the top 1% that used to consume 30 to 40% of the traffic, were now consuming about 20%. They were not consuming less; the average user was consuming a lot more, and everyone was becoming a voracious data user.
The price plans devised to make sure the network is fully utilized are backfiring and many operators are now discontinuing all-you-can-eat data plans and subsidizing adoption of limited, capped, metered models.
While 4G is seen as a means to increase capacity, it is also a way for many operators to introduce new charging models and to depart from bundled, unlimited data plans. It is also a chance to redraw the mobile network, to accommodate what is becoming increasingly a video delivery network rather than a voice or data network.


Friday, September 28, 2012

How to weather signalling storms

I was struck a few months back when I heard an anecdote from Telecom Italia about a signalling storm in their network, bringing unanticipated outages. After investigation, the operator found out that the launch of Angry Birds on Android had a major difference with the iOS version. It was a free app monetized through advertisement. Ads were being requested and served between each level (or retry).
If you are like me, you can easily go through 10 or more levels (mmmh... retries!) in a minute. Each one of these created a request going to the ad server, which generated queries to the subscriber database, location and charging engines over Diameter, resulting in +351% Diameter traffic.
The traffic generated by one chatty app brought the network to its knees within days of its launch.



As video traffic congestion becomes more prevalent and we see operators starting to measure subscribers' satisfaction in that area, we have seen several solutions emerge (video optimization, RAN optimization, policy management, HSPA+ and LTE upgrades, new pricing models...).
Signalling congestion, by contrast, remains an emerging issue. I sat yesterday with Tekelec's Director of Strategic Marketing, Joanne Steinberg, to discuss the topic and what operators should do about it.
Tekelec recently (September 2012) released its LTE Diameter Signalling Index. This report projects that Diameter traffic will increase at a +252% CAGR until 2016, from 800k to 46 million messages per second globally. This is due to a radical change in application behavior, as well as the new pricing and business models put in place by operators. Policy management, QoS management, metered charging, 2-sided business models and M2M traffic are some of the culprits highlighted in the report.

Diameter is a protocol that was originally invented to replace RADIUS for the main purposes of Authentication, Authorization and Accounting (AAA). Real-time charging and the evolution to IN drove its implementation. The protocol was created to be lighter than RADIUS, while extensible, with a variety of proprietary fields that could be added for specific uses. Its extensibility was the main criterion for its adoption as the protocol of choice for Policy and Charging functions.
A victim of its own success, the protocol is now used in LTE for a variety of tasks, ranging from querying subscriber databases (HSS) and user balances to transactional charging and policy traffic.

Tekelec's signaling solutions, together with its policy product line (inherited from the Camiant acquisition), provide a variety of solutions to handle the increasing load of Diameter signaling traffic, and the company is proposing its Diameter Signaling Router as a means to "manage, throttle, load balance and route diameter traffic".

In my opinion, data browsing is less predictable than voice or messaging traffic when it comes to signalling. While in the past a message at the establishment of the session, one at the end and optionally a few interim updates were sufficient, today's sophisticated business models and price plans require a lot of signalling traffic. Additionally, Diameter is starting to be used beyond the core packet network, towards the RAN (for RAN optimization) and towards the internet (for OTT 2-sided business models). OTT content and app providers do not understand the functioning of mobile networks, and we cannot expect device and app signalling traffic to self-regulate. While some 3GPP effort is being expended to evaluate new architectures and rules such as fast dormancy, the problem is likely to grow faster than the standards' capacity to contain it. I believe that Diameter management and planning is necessary for network operators who are departing from all-you-can-eat data plans and moving towards policy-driven traffic and charging models.
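As a conceptual illustration of the kind of per-peer throttling a Diameter signaling router performs (a generic token bucket, not any vendor's implementation):

```python
import time

class TokenBucket:
    """Generic token-bucket rate limiter, shown here as a stand-in for the
    per-peer throttling a Diameter signaling router might apply."""

    def __init__(self, rate_per_s, burst):
        self.rate = rate_per_s
        self.capacity = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True          # forward the Diameter request
        return False             # throttle (e.g. answer DIAMETER_TOO_BUSY)

# Example: cap a chatty client at 100 requests/s with a burst of 20
bucket = TokenBucket(rate_per_s=100, burst=20)
accepted = sum(bucket.allow() for _ in range(1000))
print(f"{accepted} of 1000 back-to-back requests forwarded")
```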

Monday, July 9, 2012

Edge based optimization part II: Edge packaging

As mentioned in my previous post, as video traffic increases across fixed and mobile networks, innovative companies try to find ways to reduce the costs and inefficiencies of transporting large amounts of data across geographies.

One of these new techniques is called edge based packaging and relies on adaptive bit rate streaming. It is particularly well adapted for delivery of live and VOD content (not as much for user-generated content).
As we have seen in the past, ABR has many pros and cons, which makes the technology useful only in certain conditions. For fixed-line content delivery, ABR is useful to account for network variations and provides an optimum video viewing experience. One of the drawbacks is the cost of operating ABR, when a video source must be encoded into 3 formats (Flash, Apple and Microsoft) and many target bit rates to accommodate network conditions.


Edge-based packaging allows a server situated in a CDN's PoP, at the edge cache, to perform manifest manipulation and bit rate encoding directly at the edge. The server accepts one file/stream as input and can generate a manifest, rewrap, transmux and protect content before delivery. This method can generate great savings along several dimensions:

  1. Backhaul. The amount of payload necessary to transport video is drastically reduced, as only the highest quality stream / file travels between core and edge and the creation of the multiple formats and bit rates is performed at the PoP.
  2. Storage. Only 1 version of each file / stream needs to be stored centrally. New versions are generated on the fly, per device type when accessed at the edge.
  3. CPU. Encoding is now distributed and on-demand, reducing the need for large server farms to encode predictively many versions and formats.
Additionally, this method allows the video stream to be monetized:
  1. Advertising insertion. Ad insertion can occur at the edge, on a per stream / subscriber / regional basis.
  2. Policy enforcement. The edge server can enforce and decide QoE/QoS class of services per subscriber group or per type of content / channel.

Edge based packaging provides all the benefits of broadcast with the flexibility of unicast. It actually transforms a broadcast experience into an individualized, customized, targeted unicast experience. It is the perfect tool to optimize, control and monetize OTT traffic in fixed-line networks.
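To make the manifest-manipulation piece concrete, here is a minimal sketch that generates an HLS master playlist at the edge for a set of renditions derived from the single input stream. The bitrate ladder is illustrative, and real edge packagers also transmux segments, apply DRM and rewrite media playlists, none of which is shown here.

```python
# Renditions the edge packager would derive on the fly from one input stream;
# the ladder below is illustrative.
RENDITIONS = [
    {"name": "720p", "bandwidth": 2_500_000, "resolution": "1280x720"},
    {"name": "480p", "bandwidth": 1_200_000, "resolution": "854x480"},
    {"name": "240p", "bandwidth":   400_000, "resolution": "426x240"},
]

def master_playlist(renditions):
    """Build an HLS master playlist pointing at per-rendition media playlists."""
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for r in renditions:
        lines.append(
            f'#EXT-X-STREAM-INF:BANDWIDTH={r["bandwidth"]},'
            f'RESOLUTION={r["resolution"]}'
        )
        lines.append(f'{r["name"]}/index.m3u8')
    return "\n".join(lines) + "\n"

# The edge server could filter the ladder per device class or policy before
# serving, e.g. dropping the 720p rendition for a congested cell.
print(master_playlist(RENDITIONS))
```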

Tuesday, March 6, 2012

GSMAOneAPI: One API to rule them all?


In June 2010, the GSMA released the specifications for its GSMAOneAPI. “A set of APIs that expose network capabilities over HTTP. OneAPI is developed in public and based on existing Web standards and principles. Any network operator or service provider is able to implement OneAPI.”
The API is based on XML/SOAP; its version 2, available since June 2011, includes SMS, MMS, Location and Payments as well as Voice Call Control, Data Connection Profile and Device Capability.

A live pilot implementation is ongoing in Canada with Bell, Rogers and Telus. It provides the capability for a content provider to enable cross network features such as messaging, call and data control. It is powered by Aepona.

The interesting fact about this API is that, for the first time, it exposes some data control indications inherent to the core and RAN networks to potential external content providers or aggregators.
I went through an interesting demonstration on the GSMAOneAPI stand at Mobile World Congress 2012 by a small company called StreamOne, out of the Netherlands.

The company uses the API to retrieve from the operator the bearer the device is currently connected on. Additional extensions to the API currently under consideration by the GSMA include download speed, upload speed and latency. These data points, when available to content providers and aggregators, could go a long way towards making techniques such as Adaptive Bit Rate more mobile friendly and potentially make way for a real bandwidth negotiation between network and provider. It might be the beginning of a practical approach to two-sided business models to monetize the quality of experience and service of OTT data traffic. As seen here, ABR lacks the capabilities to provide both operators and content providers with the control they need.
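A hedged sketch of how a content provider might use such an exposure API to adapt its ABR ladder: the endpoint path and response fields below are invented for the example and do not reflect the actual OneAPI specification.

```python
import requests  # assumes the 'requests' package is installed

def get_bearer_info(api_base, msisdn):
    """Query a (hypothetical) operator exposure endpoint for the current
    bearer and estimated downlink speed of a subscriber."""
    resp = requests.get(f"{api_base}/terminalstatus/{msisdn}/connection",
                        timeout=2)
    resp.raise_for_status()
    return resp.json()   # e.g. {"bearer": "HSPA", "downlink_kbps": 1800}

def pick_rendition(bearer_info):
    """Pick a target video bitrate well under the reported downlink speed."""
    downlink = bearer_info.get("downlink_kbps", 500)
    for bitrate_kbps in (2500, 1200, 700, 400, 250):
        if bitrate_kbps <= 0.7 * downlink:
            return bitrate_kbps
    return 250

# Example (assumes a reachable gateway):
# info = get_bearer_info("https://api.operator.example/oneapi/v2", "1234567890")
# print(pick_rendition(info))
```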





Of course, when looking at the standardization track, these efforts take years to translate into commercial deployments, but the seed is there and if network operators deploy it and content providers use it, we could see a practical implementation in the next 3-5 years. Want to know more about practical uses and ABR alternatives? Check here.

Monday, March 5, 2012

NSN buoyant on its liquid net

I was with Rajeev Suri, CEO of NSN, together with about 150 of my esteemed colleagues from the press and analyst community on February 26 at Barcelona's world trade center drinking NSN's Kool Aid for 2012. As it turns out, the Liquid Net is not hard to swallow.

The first trend highlighted is about big data, big mobile data that is. NSN's prediction is that by 2020, consumers will use 1GB per day on mobile networks.
When confronted with these predictions, network operators have highlighted 5 challenges:
  1. Improve network performances (32%)
  2. Address decline in revenue (21%)
  3. Monetize the mobile internet (21%)
  4. Network evolution (20%)
  5. Win in new competitive environment (20%)
Don't worry if the total is more than 100%: either it was a multiple-choice questionnaire or NSN's view is that operators are very preoccupied.

Conveniently, these challenges are met with 5 strategies (hopes) that NSN can help with:

  1. Move to LTE
  2. Intelligent networks and capacity
  3. Tiered pricing
  4. Individual experience
  5. Operational efficiency
And this is what has been feeding the company in the last year, seeing sales double to 14B euros in 2011 and turning an actual operating profit of 225m euros. The CEO agrees that NSN is not back yet and more divestment and redundancies are planned (8,500 people out of 17,000 will leave) for the company to reach its long term target of 10% operating profit. NSN expects its LTE market share to double in 2012.

Liquid Net
Liquid Net is the moniker chosen by NSN to answer the general anxiety surrounding data growth and revenue shrinkage. It promises 1000 times more capacity by 2020 (yes, 1000) and the very complex equation to explain the gain is as follows: 10x more cell sites (figures...), 10 times more spectrum and 10 times more efficiency.

The example chosen to illustrate Liquid Net was, I think, telling. NSN has deployed its network at an operator in the UK where it famously replaced Ericsson last summer. It has since been able to detect devices and capabilities and adapt video resolutions, with Netflix for instance, resulting in 50% less congestion in some network conditions. That is hard to believe. Netflix being encrypted, I was scratching my head trying to understand how a lossless technique could reach these numbers.
The overall savings claimed for implementing liquid networks were 65% capacity increase, 30% coverage gain and 35% reduction in TCO.

Since video delivery in mobile networks is a bit of a fixation of mine, I decided to dig deeper into these extraordinary claims. I have to confess my skepticism at the outset. I am familiar with NSN, having dealt with the company as a vendor for the last 15 years, and am more familiar with its glacial pace of innovation in core networks.

I have to say, having gone through a private briefing, presentation and demonstration, I was surprised by the result. I am starting to change my perspective on NSN and so should you. To find out why and how, you will need to read the write up in my report.

Monday, February 20, 2012

Mobile video QOE part I: Subjective measurement


As video traffic continues to flood many wireless networks, over 80 mobile network operators have turned towards video optimization as a means to reduce the costs associated with growing their capacity for video traffic.
In many cases, the trials and deployments I have been involved in have shown many carriers at a loss when it comes to comparing one vendor or technology against another. Lately, a few specialized vendors have been offering video QoE (Quality of Experience) tools to measure the quality of the video transmitted over wireless networks. In some cases, the video optimization vendors themselves have also started to package some measurement capabilities with their tools to illustrate the quality of their encoding.
In the next few posts, and in more detail in my report "Video Optimization 2012", I examine the challenges and benefits of measuring video QoE in wireless networks, together with the most popular methods and their limitations.
Video QoE subjective measurement
Video quality is a very subjective matter. There is a whole body of science dedicated to providing an objective measure of a subjective quality. The attempt here is to rationalize the differences in quality between two videos via a mathematical measurement. These are called objective measurements and will be addressed in my next posts. Subjective measurement, on the other hand, is a more reliable means to determine a video's quality. It is also the most expensive and the most time-consuming technique if performed properly.
For video optimization, a subjective measurement usually necessitates a focus group that is shown several versions of a video at different qualities (read: encodings). The individual opinion of each viewer is recorded in a templatized feedback form and averaged. For this method to work, all users need to see the same videos, in the same sequence, under the same conditions. It means that if the videos are to be streamed on a wireless network, it should be over a controlled environment, so that the same level of QoS is served for the same videos. You can then vary the protocol by having users compare the original video with a modified version, both played at the same time, on the same device, for instance.
The averaged opinion, the Mean Opinion Score (MOS), of each video is then used to rank the different versions. In the case of video optimization, we can imagine an original video encoded at 2Mbps, then 4 versions provided by each vendor at 1Mbps, 750kbps, 500kbps and 250kbps. Each of the subjects in the focus group will rank each version from each vendor from 1 to 5, for instance.
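As a small illustration of the scoring step, the sketch below averages a panel's 1-to-5 ratings into a MOS per vendor and per encoding bit rate; the ratings and vendor names are invented for the example.

```python
from statistics import mean, stdev

# Hypothetical panel ratings: scores[vendor][bitrate_kbps] = list of 1-5 ratings
scores = {
    "vendor_a": {1000: [4, 5, 4, 4], 500: [2, 3, 2, 2], 250: [1, 2, 1, 2]},
    "vendor_b": {1000: [4, 4, 4, 3], 500: [4, 3, 4, 4], 250: [3, 3, 4, 3]},
}

def mos_table(scores):
    """Compute the Mean Opinion Score (and spread) per vendor and bit rate."""
    return {
        vendor: {
            rate: (round(mean(ratings), 2), round(stdev(ratings), 2))
            for rate, ratings in by_rate.items()
        }
        for vendor, by_rate in scores.items()
    }

for vendor, by_rate in mos_table(scores).items():
    for rate, (mos, spread) in sorted(by_rate.items(), reverse=True):
        print(f"{vendor} @ {rate} kbps: MOS {mos} (spread {spread})")
```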
The environment must be strictly controlled for the results to be meaningful. The variables must be the same for each vendor, e.g. all performing transcoding in real time or all offline, the same network conditions for all the playbacks / streams, and of course the same devices and the same group of users.
You can easily understand that this method can be time consuming and costly, as network equipment and lab time must be reserved, network QoS must be controlled, the focus group must be available for the duration, etc...
In that example, the carrier would have each corresponding version from each vendor compared in parallel for the computation of the MOS.  The result could be something like this:
The size of the sample (the number of users in the focus group) and how controlled the environment is can dramatically affect the result, and it is not rare to find aberrant results, as in the example above where vendor "a" sees its result increase from version 2 to 3.
If correctly executed, this test can track the relative quality of each vendor at different levels of optimization. In this case, you can see that vendor "a" has a high level of perceived quality at medium-high bit rates but performs poorly at lower bit rates. Vendor "b" shows little degradation as the encoding decreases, while vendors "c" and "d" show near-linear degradation inversely proportional to the encoding rate.
In every case, the test must be performed in a controlled environment to be valid. Results will vary, sometimes greatly, from one vendor to another, and sometimes with the same vendor at different bit rates, so an expert in video is necessary to create the testing protocol, evaluate the vendors' setups, analyse the results and interpret the scores. As you can see, this is not an easy task, and rare are the carriers who have successfully performed subjective analysis with meaningful results for vendor evaluation. This is why, by and large, vendors and carriers have started to look at automated tools to evaluate the existing video quality in a given network, to compare different vendors and technologies, and to measure ongoing perceived quality degradation due to network congestion or destructive video optimization. This will be the subject of my next posts.