
Wednesday, October 18, 2023

Generative AI and Intellectual Property

Since the launch of ChatGPT, Generative Artificial Intelligence and Large Language Models have gained extraordinary popularity and agency in a very short time. As we are all playing around with the most approachable use cases to generate text, images and videos, governments, global organizations and companies are busy developing the technology, racing to harness the early mover's advantage this disruption will bring to all areas of our society.

I am not a specialist in the field and my musings might be erroneous here, but it feels that the term Gen AI might be a little misleading, since a lot of the technology relies on vast datasets that are used to assemble composite final products. Essentially, the creation aspect is more assembly than pure creation. One could object that every music sheet is just an assembly of notes and that creation is still there, even as the author is influenced by their taste and exposure to other authors... Fair enough, but in the case of document / text creation, it feels that the use of public information to synthesize a document is not necessarily novel.

In any case, I am an information worker, most times a labourer, sometimes an artisan, but in any case I live from my intellectual property. I chose to make some of that intellectual property available license-free here on this blog, while a larger part is sold in the form of reports, workshops, consulting work, etc. This work might or might not be license-free, but it is always copyrighted, meaning that I hold the rights to the content and allow its distribution under specific covenants.

It strikes me that the crawlers I see going through my blog, indexing the content I make publicly available, serve two purposes at odds with each other. The first allows my content to be discovered and to reach a larger audience, which benefits me in terms of notoriety and increased business. The second, more insidious, not only indexes but mines my content for aggregation into LLMs, so that it can be regurgitated and reassembled by an AI. It could be extraordinarily difficult to apportion an AI's rendition of an aggregated document to its sources, but it feels unfair that copyrighted content is not attributed.

I have been playing with the idea of using LLMs to create content. Anyone can do that with prompts and some license-free software, but I am fascinated by the idea of an AI assistant that would be able to write like me, using my semantics and quirks, and that I could train through reinforcement learning from human feedback. Again, this poses some issues. To be effective, this AI would have to have access to my dataset, the collection of intellectual property I have created over the years. This content is protected and is my livelihood, so I cannot share it with a third party without strict conditions. That rules out free software that can reuse whatever content you give it to ingest.

With licensed software, I am still not sure the right mechanisms are in place for copyright protection and control, so that I can ensure the content I feed to the LLM remains protected and accessible only to me, while the LLM can still ingest license-free, public-domain content to enrich the dataset.

Are other information workers worried that LLM/AI reuses their content without attribution? Is it time to have a conversation about Gen AI, digital rights management and copyright?

***This blog post was created organically, without assistance from Gen AI, except for the picture, which was created with Canva.com

Monday, June 8, 2015

Data traffic optimization feature set

Data traffic optimization in wireless networks has reached a mature stage as a technology. The innovations that marked the years 2008 - 2012 are now slowing down, and most core vendors exhibit a fairly homogeneous feature set.

The difference comes in the implementation of these features, which can yield vastly different results, depending on whether vendors are using open-source or purpose-built caching or transcoding engines, and whether congestion detection is based on observed or deduced parameters.

Vendors nowadays tend to differentiate on QoE measurement / management and on monetization strategies, including content injection, recommendation and advertising.

Here is a list of commonly implemented optimization techniques in wireless networks; a short sketch of the video QoE measurements follows the list.
  •  TCP optimization
    • Buffer bloat management
    • Round trip time management
  • Web optimization
    • GZIP
    •  JPEG / PNG… transcoding
    • Server-side JavaScript
    • White space / comments… removal
  • Lossless optimization
    • Throttling / pacing
    • Caching
    • Adaptive bit rate manipulation
    • Manifest mediation
    • Rate capping
  • Lossy optimization
    • Frame rate reduction
    • Transcoding
      • Online
      • Offline
      • Transrating
    • Contextual optimization
      • Dynamic bit rate adaptation
      • Device targeted optimization
      • Content targeted optimization
      • Rule-based optimization
      • Policy driven optimization
      • Surgical optimization / Congestion avoidance
  • Congestion detection
    • TCP parameters based
    • RAN explicit indication
    • Probe based
    • Heuristics combination based
  • Encrypted traffic management
    • Encrypted traffic analytics
    • Throttling / pacing
    • Transparent proxy
    • Explicit proxy
  • QoE measurement
    • Web
      • page size
      • page load time (total)
      • page load time (first rendering)
    • Video
      • Temporal measurements
        • Time to start
        • Duration loading
        • Duration and number of buffering interruptions
        • Changes in adaptive bit rates
        • Quantization
        • Delivery MOS
      • Spatial measurements
        • Packet loss
        • Blockiness
        • Blurriness
        • PSNR / SSIM
        • Presentation MOS

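To make the temporal video measurements above more concrete, here is a minimal sketch in Python of how they could be derived from player-side events. The event names, schema and thresholds are assumptions made for illustration, not any vendor's actual reporting format.

```python
from dataclasses import dataclass

@dataclass
class PlayerEvent:
    t: float    # seconds since the user pressed play
    kind: str   # "first_frame", "buffer_start", "buffer_end", "bitrate_switch" (hypothetical names)

def temporal_qoe(events: list, session_end_s: float) -> dict:
    """Derive the temporal QoE indicators listed above from one session's player events."""
    first_frame = next(e.t for e in events if e.kind == "first_frame")
    stall_time, stall_count, start = 0.0, 0, None
    for e in events:
        if e.kind == "buffer_start":
            start, stall_count = e.t, stall_count + 1
        elif e.kind == "buffer_end" and start is not None:
            stall_time, start = stall_time + (e.t - start), None
    return {
        "time_to_start_s": first_frame,                            # time to start
        "buffering_count": stall_count,                            # number of interruptions
        "buffering_duration_s": round(stall_time, 2),              # duration of interruptions
        "buffering_ratio": round(stall_time / session_end_s, 3),   # share of session spent stalled
        "bitrate_switches": sum(1 for e in events if e.kind == "bitrate_switch"),  # ABR changes
    }

# Example: a session that started after 7 s and stalled once for 4 s
events = [PlayerEvent(7.0, "first_frame"), PlayerEvent(15.0, "buffer_start"),
          PlayerEvent(19.0, "buffer_end"), PlayerEvent(25.0, "bitrate_switch")]
print(temporal_qoe(events, session_end_s=60.0))
```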

An explanation of each technology and its feature set can be obtained as part of the mobile video monetization report series or individually as a feature report or in a workshop.

Wednesday, June 18, 2014

Are we ready for experience assurance? part II

Many vendors' reporting capabilities are just fine when it comes to troubleshooting issues associated with connectivity or the health of their own system. Their capability to infer, beyond observation of their own system, the health of a connection or of the network is oftentimes limited.

Analytics, by definition, require a large dataset, ideally covering several systems and elements, to provide correlation and pattern recognition on otherwise seemingly random events. In a complex environment like a mobile network, it is extremely difficult to understand what a user's experience is on their phone. There are means to extrapolate and infer the state of a connection, a cell or a service by looking at fluctuations in network connections.

Traffic management vendors routinely report on the state of a session by measuring the TCP connection and its changes. Being able to associate with that session the device type, time of day, location and service being used is good, but a far cry from analytics.
Most systems will be able to detect that a connection went wrong and a user had a sub-par experience. Being able to tell why is where analytics' value is. Being able to prevent it is big data territory.
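As a minimal illustration of this kind of inference, the sketch below flags a degraded session from per-connection TCP observations such as round-trip time and retransmission rate. The field names and thresholds are arbitrary assumptions, not any vendor's method.

```python
def classify_session(rtt_ms_samples, retransmit_ratio, throughput_kbps):
    """Rough heuristic: infer a session's health from TCP-level observations only,
    i.e. what a core network element can see without a RAN probe. Thresholds are illustrative."""
    avg_rtt = sum(rtt_ms_samples) / len(rtt_ms_samples)
    jitter = max(rtt_ms_samples) - min(rtt_ms_samples)
    if retransmit_ratio > 0.05 or avg_rtt > 300:
        return "degraded"   # likely congestion or radio impairment
    if jitter > 150 or throughput_kbps < 500:
        return "suspect"    # worth correlating with cell, location and time-of-day data
    return "healthy"

# Example: a connection with one large RTT spike but little loss -> "suspect"
print(classify_session([80, 120, 420, 90], retransmit_ratio=0.02, throughput_kbps=1200))
```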
So what is experience assurance? How does (should) it work?

For instance, a client calls the call center to complain about a poor video experience. The video was sluggish to start with: it started 7 seconds after pressing play and began buffering after 15 seconds of playback.
A DPI engine would be able to identify whether TCP and HTTP traffic were running efficiently at the time of the connection.
A probe in the RAN would be able to report a congestion event in a specific location.
A video reporting engine would be able to look at whether the definition and encoding of the video were compatible with the network speed at the time.
The media player in the device would be able to report whether there were enough resources locally to decode, buffer, process and play the video.
A video gateway should be able to detect the connection impairment in real time and to provide the means to correct or elegantly notify of the impending state of the video before the customer experiences a negative QoE.
A big data analytics platform should be able to point out that the poor experience is the result of a congestion in that cell that occurs nearly daily at the same time because the antenna serving that cell is in an area where there is a train station and every day the rush hour brings throngs of people connecting to that cell at roughly the same time.
An experience assurance framework would be able to provide feedback instructions to the policy framework, forcing downloads, email and other non-real-time data traffic to be delayed to accommodate short bursts of video usage until the congestion passes. It should also allow deciding what the minimum level of quality should be for video and data traffic, in terms of delivery, encoding speed, picture quality, start-up time, etc., and proactively managing the video traffic to that target when the network "knows" that congestion is likely.
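To illustrate the "train station at rush hour" example, here is a hedged sketch of how an analytics platform might surface a recurring per-cell congestion pattern from hourly KPI records and turn it into a pre-emptive policy hint. The record layout, thresholds and actions are invented for illustration only.

```python
from collections import defaultdict

def recurring_congestion(records, min_days=5, util_threshold=0.9):
    """records: iterable of (cell_id, day, hour, utilization 0..1).
    Returns the (cell_id, hour) pairs that are congested at the same hour on many
    distinct days -- the kind of pattern a policy engine could act on in advance."""
    seen = defaultdict(set)
    for cell_id, day, hour, util in records:
        if util >= util_threshold:
            seen[(cell_id, hour)].add(day)
    return [key for key, days in seen.items() if len(days) >= min_days]

def policy_hints(hotspots):
    """Turn detected hotspots into illustrative pre-congestion actions."""
    return [{"cell": cell, "hour": hour,
             "action": "delay background downloads/email, hold video at a minimum quality target"}
            for cell, hour in hotspots]

# Example: one cell congested at 08:00 on five consecutive days
records = [("cell-42", day, 8, 0.95) for day in range(5)] + [("cell-42", 2, 14, 0.6)]
print(policy_hints(recurring_congestion(records)))
```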

Experience assurance is a concept that is making its debut when it comes to data and video services. To be effective, a proper solution should ideally be able to gather real-time events from the RAN, the core, the content, the service provider and the device, and to decide in real time what the nature of the potential impairment is, what the possible courses of action are to reduce or negate the impairment, and what the means are to notify the user of a sub-optimal experience. No single vendor, to my knowledge, is able to achieve this use case at this point in time, either on its own or through partnerships. The technology vendors are too specialized, and the elements involved in the delivery and management of data traffic are too loosely integrated, to offer real experience assurance today.

Vendors who want to provide experience assurance should first focus on the data. Most systems create event or call logs, registering hundreds of parameters per session, every second. Properly representing what is happening on the platform itself is quite difficult. It is an exercise in interpreting and representing what is relevant and actionable versus what is merely interesting. This is an exercise in small data. Understanding relevance and discriminating good data from over-engineered logs is key.


A good experience assurance solution must rely on a strong detection, analytics and traffic management solution. When it comes to video, this means a video gateway that is able to perform deep media inspection and to extract data points that can be exported to a reporting engine. The data exported cannot be just a dump of every event of every session. The reporting engine is only going to be as good as the quality of the data fed into it. This is why traffic management products must be designed with analytics in mind from the ground up if they are to be efficiently integrated within an experience assurance framework.

Tuesday, June 17, 2014

Are we ready for experience assurance? part I




As mentioned before, Quality of Experience (QoE) was a major theme in 2012-2013. How to detect, measure and manage various aspects of the customer experience has in many cases taken precedence over the savings or monetization rhetoric at vendors and operators alike.

As illustrated in a recent telecoms.com survey, operators see network quality as the most important differentiator in their market. In their overwhelming majority, they would like to implement business models where they receive a revenue share for a guaranteed level of quality. The problem comes with defining what quality means in a mobile network.


It is clear that many network operators in 2014 have come to the conclusion that they are ill-equipped to understand the consumer's experience when it comes to data services in general and video in particular. It is not rare for a network operator's customer care center to receive complaints about the quality of the video service when no alarm, failure or even congestion has been detected. Obviously, serving your clients when you are blind to their experience is a recipe for churn.

As a result, many operators spent much of 2013 requesting information and evaluating various vendors' capabilities to measure video QoE. We have seen (here and here) the different types of video QoE measurement.

This line of questioning has spurred a flurry of product launches, partnerships and announcements in the field of analytics. Here is a list of announcements in the field in the last few months:
  • Procera Networks partners with Avvasi
  • Citrix partners with Zettics and launches ByteMobile Insight
  • Kontron partners with Vantrix and launches cloud based analytics
  • Sandvine launches the Real Time Entertainment Dashboard
  • Guavus partners with Opera Skyfire
  • Alcatel Lucent launches Motive Big Network Analytics
  • Huawei partners with Actix to deliver customer experience analytics…

Suddenly, everyone who has a web GUI and a reporting engine delivers delicately crafted analytics, surfing the wave of big data, Hadoop and NFV as a means to satisfy operators' ever-growing need for actionable insight.

Unfortunately, in some cases, the operator will find itself with a collection of ill-fitting dashboards providing anecdotal or contradictory data. This is likely to lead to more confusion than problem solving. So what is (or should be) experience assurance? The answer in tomorrow's post.


Monday, May 27, 2013

All bytes are not created equal...



Recent discussions with a number of my clients have brought to light a fundamental misconception. Mobile video is not data. It is not a different use case of data or a particular form of data; it is simply a different service. The sooner network operators understand that they cannot count, measure and control video the same way as browsing data, the sooner they will have a chance to integrate the value chain of video delivery.

Deep packet inspection engines count bytes; categorize traffic per protocol, bearer and URL; and throttle and prioritize data flows based on rules that are video-myopic. Their concern is Quality of Service (QoS), not Quality of Experience (QoE). Policy and charging engines decide, meter and limit traffic in real time based on the incomplete picture painted by DPIs and other network elements.

Not understanding whether traffic is video (or assuming it is video based solely on the URL) can prove catastrophic for the user experience and the user's bill. How can a traffic management engine instantiate video charging and prioritization rules if it cannot differentiate between download, progressive download and adaptive bit rate? How can it decide what the appropriate bandwidth for a service is if it does not understand the encoding of the video, the available bit rates, whether it is HD or SD, or what the user expects?
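As a toy illustration of the distinction drawn above, the sketch below guesses whether an HTTP flow looks like a plain download, a progressive download or adaptive bit rate delivery from its request pattern. The signatures used (manifest extensions, segment counts, content types) are simplistic assumptions, not a production DPI classifier.

```python
def guess_delivery_mode(requests):
    """requests: list of (url, content_type, byte_count) tuples observed on one flow.
    Heuristic only: ABR shows a manifest plus many small media segments, progressive
    download shows one long-running media response, anything else is treated as plain download."""
    manifests = [u for u, ct, b in requests if u.endswith((".m3u8", ".mpd"))]
    media = [(u, ct, b) for u, ct, b in requests
             if ct.startswith("video/") or u.endswith((".ts", ".m4s", ".mp4"))]
    if manifests and len(media) > 5:
        return "adaptive bit rate"
    if len(media) == 1 and media[0][2] > 5_000_000:
        return "progressive download"
    return "download / other"

# Example: an HLS-like flow (one manifest, ten media segments)
flow = [("index.m3u8", "application/x-mpegURL", 2_000)] + \
       [(f"seg{i}.ts", "video/MP2T", 800_000) for i in range(10)]
print(guess_delivery_mode(flow))
```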

Content providers naturally push content of the highest quality the network can afford, while smartphones and tablets try to grab as much network capacity as is available at the establishment of a session to guarantee user experience, often to the detriment of other connections and devices. It is wrong to assume that the quality of experience in video is the result of a harmonious negotiation between content, device and network.
It is actually quite the opposite, each party pulling in its own direction with conflicting priorities.
User experience suffers as a result, and we have started to see instances of users complaining or churning due to bad video experiences.

All bytes are not created equal. Video weighs heavier and carries a larger emotional attachment than email or browsing services when it comes to the user's experience of a network's quality. This is one of the subjects I will be presenting at Informa's Mobile Video Global Summit in Berlin next week.



Tuesday, November 6, 2012

LTE and video elasticity

I often get asked at events such as Broadband Traffic Management 2012, where I am chairing the mobile video stream this afternoon: "How does video traffic evolve in an LTE network? Won't LTE negate the need for traffic management and video optimization?"

Jens Schulte-Bockum, CEO of Vodafone Germany, shocked the industry last week by indicating that 85% of Vodafone Germany's LTE traffic is mobile video.

I think what most people fail to understand is that video, unlike voice or generic data, is elastic. Technologies such as adaptive streaming and source-based encoding by content providers mean that devices and content providers, given bandwidth, will utilize all that is available.

Device manufacturers implement increasingly aggressive versions of video streaming, grabbing as much bandwidth as is available, independently of the video's encoding, while content providers tend to offer ever-increasing video quality, moving from 480p to 720p and 1080p, and soon 4K.
This was corroborated this morning by Eric Klinker, president and CEO of BitTorrent.
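The elasticity argument can be made concrete with a small sketch: given an adaptive bit rate ladder, a player simply selects the highest rendition the measured throughput allows, so whatever extra capacity LTE adds is consumed almost immediately. The ladder values below are typical published rates, used purely for illustration.

```python
ABR_LADDER_KBPS = [400, 750, 1500, 3000, 6000, 15000]  # ~240p up to ~4K, illustrative values

def selected_bitrate(available_kbps, safety=0.8):
    """Pick the highest rendition that fits under a safety margin of the measured throughput."""
    budget = available_kbps * safety
    fitting = [rate for rate in ABR_LADDER_KBPS if rate <= budget]
    return fitting[-1] if fitting else ABR_LADDER_KBPS[0]

# As the network offers more throughput, the player's demand scales with it.
for capacity in (1000, 5000, 20000):
    print(f"{capacity} kbps available -> {selected_bitrate(capacity)} kbps requested")
```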

Operators need to understand that video must be managed as an independent service, separately from data and voice, as it behaves differently and will "eat up" resources as they are made available.

So the short answer is no, LTE will not solve the issue but rather become a new variable in the equation.

Friday, September 28, 2012

How to weather signalling storms

I was struck a few months back when I heard an anecdote from Telecom Italia about a signalling storm in their network, bringing unanticipated outages. After investigation, the operator found out that the launch of Angry Birds on Android had a major difference from the iOS version: it was a free app monetized through advertisement. Ads were being requested and served between each level (or retry).
If you are like me, you can easily go through 10 or more levels (mmmh... retries) in a minute. Each one of these created a request to the ad server, which generated queries to the subscriber database, location and charging engines over Diameter, resulting in a 351% increase in Diameter traffic.
The traffic generated by one chatty app brought the network to its knees within days of its launch.
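A back-of-the-envelope sketch of the amplification at work in this anecdote: if each level (or retry) triggers one ad request, and each ad request fans out into several Diameter queries (subscriber lookup, location, charging), a large population of casual players multiplies core signalling very quickly. The fan-out factor and player counts below are invented to show the mechanism; they are not Telecom Italia's figures.

```python
def diameter_msgs_per_minute(players, levels_per_minute=10, diameter_queries_per_ad=3):
    """Each level change requests an ad; each ad triggers several Diameter queries
    (subscriber DB, location, charging). All numbers are illustrative assumptions."""
    ad_requests = players * levels_per_minute
    return ad_requests * diameter_queries_per_ad

# 100,000 concurrent casual players -> 3 million extra Diameter messages per minute
print(diameter_msgs_per_minute(players=100_000))
```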



As video traffic congestion becomes more prevalent and operators start to measure subscribers' satisfaction in that area, we have seen several solutions emerge (video optimization, RAN optimization, policy management, HSPA+ and LTE upgrades, new pricing models...).
Signalling congestion, by contrast, remains an emerging issue. I sat down yesterday with Tekelec's Director of Strategic Marketing, Joanne Steinberg, to discuss the topic and what operators should do about it.
Tekelec recently (September 2012) released its LTE Diameter Signalling Index. This report projects that Diameter traffic will increase at a 252% CAGR until 2016, from 800k to 46 million messages per second globally. This is due to a radical change in application behavior, as well as the new pricing and business models put in place by operators. Policy management, QoS management, metered charging, two-sided business models and M2M traffic are some of the culprits highlighted in the report.

Diameter is a protocol originally invented to replace RADIUS, for the main purposes of Authentication, Authorization and Accounting (AAA). Real-time charging and the evolution to IN drove its implementation. The protocol was created to be lighter than RADIUS while remaining extensible, with a variety of proprietary fields that could be added for specific uses. Its extensibility was the main criterion for its adoption as the protocol of choice for policy and charging functions.
A victim of its own success, the protocol is now used in LTE for a variety of tasks ranging from querying subscriber databases (HSS) to querying user balances and carrying transactional charging and policy traffic.

Tekelec's signaling solutions, together with its policy product line (inherited from the Camiant acquisition), provide a variety of solutions to handle the increasing load of Diameter signaling traffic; the company is proposing its Diameter Signaling Router as a means to "manage, throttle, load balance and route diameter traffic".

In my opinion, data browsing is less predictable than voice or messaging traffic when it comes to signalling. While in the past a message at the establishment of the session, one at the end and optionally a few interim updates were sufficient, today's sophisticated business models and price plans require a lot of signalling traffic. Additionally, Diameter is starting to be used outside of the core packet network, towards the RAN (for RAN optimization) and towards the internet (for OTT two-sided business models). OTT content and app providers do not understand the functioning of mobile networks, and we cannot expect device and app signalling traffic to self-regulate. While some 3GPP effort is being expended to evaluate new architectures and rules such as fast dormancy, the problem is likely to grow faster than the standards' capacity to contain it. I believe that Diameter management and planning is necessary for network operators who are departing from all-you-can-eat data plans towards policy-driven traffic and charging models.

Wednesday, April 11, 2012

Policy driven optimization

The video optimization market is still young, but with deployments in over 80 mobile networks globally, I am officially transitioning it from the emerging to the growth phase in the technology life cycle matrix.


Mobile World Congress brought much news in that segment, from new entrants to network announcements, technology launches and new partnerships. I think one of the most interesting trends is in policy and charging management for video.


Operators understand that charging models based on pure data consumption are doomed to be hard for users to understand and to be either extremely inefficient or expensive. In a world where a new iPad can consume a subscriber's data plan in a matter of hours, while the same subscriber could be watching 4 to 8 times that amount of video on a different device, the one-size-fits-all data plan is a dangerous proposition.


While the tool set to address the issue is essentially in place, with intelligent GGSNs, EPCs, DPIs, PCRFs and video delivery and optimization engines, this collection of devices has mostly been managing its own portion of traffic in a very disorganized fashion: access control at the radio and transport layers segregated from protocol and application, accounting separated from authorization and charging...
Policy control is the technology designed to unify them and, since this market's inception, it has been doing a good job of coordinating access control, accounting, charging, rating and permissions management for voice and data.


What about video?
The Diameter Gx interface is extensible and provides the semantics to convey traffic observations and decisions between one or several policy decision points and policy enforcement points. The standard allows for complex iterative challenges between end points to ascertain a session's user, their permissions and their balance as they use cellular services.
Video was not a dominant part of the traffic when the policy frameworks were put in place and, not surprisingly, the first-generation PCRFs and video optimization deployments were completely independent. Rules had to be provisioned and maintained in separate systems, because the PCRF was not video-aware and the video optimization platforms were not policy-aware.
This led to many issues, ranging from poor experience (a DPI instructed to throttle traffic below the encoding rate of a video) and bill shock (ill-informed users blowing past their data allowance) to revenue leakage (poorly designed charging models unable to segregate the different HTTP traffic).
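One of these failure modes, a DPI instructed to throttle a flow below the video's encoding rate, is easy to express as a guard once the policy decision point is video-aware. The sketch below is a hypothetical rule for illustration, not any PCRF product's logic or a standard Gx attribute set.

```python
def effective_cap_kbps(requested_cap_kbps, video_encoding_kbps=None, headroom=1.2):
    """Return the bandwidth cap a video-aware policy engine would actually apply.
    If the flow is a known video and the requested cap would starve playback, raise the
    cap to the encoding rate plus some headroom (or choose to transrate / notify instead)."""
    if video_encoding_kbps is None:
        return requested_cap_kbps                 # not video: apply the requested cap as-is
    floor = int(video_encoding_kbps * headroom)
    return max(requested_cap_kbps, floor)         # never throttle below what playback needs

print(effective_cap_kbps(500, video_encoding_kbps=1200))   # -> 1440, not 500
```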


The next generation of networks sees a much tighter integration between policy decision and policy enforcement for the delivery of video in mobile networks. Many vendors in both segments collaborate and have moved past pure interoperability testing to deployments in commercial networks. Unfortunately, we have not seen many proof points of these integrations yet. Mostly, this is because it is an emerging area. Operators are still trying to find the right recipe for video charging, standards do not offer guidance for video-specific policies, and vendors have to rely on two-way (proprietary?) implementations.


Lately, we have seen the leaders in policy management and video optimization collaborate much more closely to offer solutions in this space; in some cases as the result of being deployed in the same networks and being "forced" to integrate gracefully, in many cases because the market is entering a new stage of maturation. As you well know, I have been advocating closer collaboration between DPI, policy management and video optimization for a while (here, here and here for instance). I think these are signs of market maturation that will accelerate concentration in that space. There are more and more rumors of video optimization vendors getting closer to mature policy vendors. It is a logical conclusion for operators to get a better-integrated traffic management and charging ecosystem centered around video going forward. I am looking forward to discussing these topics and more at Policy Control 2012 in Amsterdam, April 24-25.

Thursday, March 15, 2012

Mobile video optimization 2012: executive summary


As I publish my first report (description here), have an exclusive glance at the summary below.


Executive Summary
Video is a global phenomenon in mobile networks. In only 3 years, it has exploded from a marginal position (less than 10%) to dominating mobile traffic in 2012 with over 50%.
Mobile networks, until now, have been designed and deployed predominantly for transactional data. Messaging, email and browsing are fairly low impact and lightweight in terms of payload, and only necessitated speeds compatible with UMTS. Video brings a new element to the equation. Users rarely complained if their text or email arrived late; in fact, they rarely noticed. Video provides immediate feedback. Consumers demand quality and increasingly equate the network's quality with the video quality.

With the wide implementation of HSPA(+) and the first LTE deployments, together with the availability of attractive new smartphones, tablets and ultrabooks, it has become clear that today's networks and price structures are ill-prepared for this new era.
Handset and device vendors have gained much power in the balance, and many consumers choose a device first, before a provider.

In parallel, the suppliers of content and services are boldly pushing their consumer relationships to bypass traditional delivery media. These Over-The-Top (OTT) players extract more value from consumers than the access and network providers. This trend is accelerating and threatens the very fabric of the business model for delivering mobile services.

This is the backdrop of the state of mobile video optimization in 2012. Mobile network operators find themselves in a situation where their core network is composed of many complex elements (GGSN, EPC, browsing gateways, proxies, DPI, PCRF…) that are extremely specialized but have been designed with transactional data in mind. The price plans devised to make sure the network is fully utilized are backfiring, and many carriers are discontinuing all-you-can-eat data plans and subsidizing the adoption of limited, capped, metered models. Radio access is a scarce resource, with many operators battling their regulators to obtain more spectrum. The current model for adding capacity, based on purchasing more base stations and densifying the network, is finding its limits. Costs for network build-out are even expected to exceed data revenues in the coming years.
On the technical front, many operators are hitting the Shannon limit, the theoretical ceiling for spectrum efficiency. Diminishing returns are the rule rather than the exception as RANs become denser for the same available spectrum; noise and interference increase.
On the financial front, should an operator follow the demand, it would have to double its mobile data capacity on a yearly basis, while the projected revenue increase for data services shows only a CAGR of 20% through 2015. How can operators keep running their business profitably?
Operationally, doubling capacity every year seems impossible for most networks, which work with 3 to 5 year roll-out plans.
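A quick worked example of the mismatch just described, using the orders of magnitude quoted above (traffic doubling yearly, data revenue growing at roughly 20% a year); the starting values are normalized to 1 and purely illustrative.

```python
capacity_demand, revenue = 1.0, 1.0
for year in range(1, 5):
    capacity_demand *= 2.0   # traffic, and thus required capacity, doubles every year
    revenue *= 1.2           # data revenue grows at ~20% CAGR
    print(f"year {year}: demand x{capacity_demand:.0f}, revenue x{revenue:.2f}")
# After 4 years demand has grown 16x while revenue has grown roughly 2x: unsustainable
# without efficiency gains (optimization, offload) or new pricing models.
```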
Solutions exist and are starting to emerge: upgrading to HSPA+ or LTE, using femtocells or picocells, drastically changing the pricing structure of video and social services, offloading part of the traffic to wifi, implementing adaptive bit rate, optimizing the radio link, caching, using CDNs, imagining new business models between content providers, device manufacturers and operators… All these solutions and others are examined in this report.
Video optimization has emerged as one of the technologies deployed to solve some of the issues highlighted above. Deployed in over 80 networks globally, it is a market segment that generated $102m in 2011 and is projected to generate over $260m in 2012. While it is not the unique solution to this issue, {Core Analysis} believes that most network operators will have to deploy video optimization as a weapon in the arsenal to combat the video invasion of their networks. 2009 to 2011 saw the first video optimization commercial deployments, mostly as a defensive move to shore up embattled networks. 2012 sees video optimization as a means to complement and implement monetization strategies, based on usage metering and control, quality of experience measurement and video class-of-service delivery.

Thursday, September 15, 2011

Openet's Intelligent Video Management Solution

As you well know, I have been advocating closer collaboration between DPI, policy management and video optimization for a while (here and here for instance).


In my mind, most carriers had dealt mostly with transactional data traffic until video came along. There are some fundamental differences between managing transactional and flow-based data traffic. The quality of experience of a video service depends as much on the intrinsic quality of the video as on the way that video is delivered.


In a mobile network, with a daisy chain of proxies and gateways (GGSN, DPI, browsing gateway, video optimization engine, caching systems...), the user experience of a streamed video is only going to be as good as the lowest common denominator of that delivery chain.




Gary Rieschick, Director – Wireless and Broadband Solutions at Openet spoke with me today about the Intelligent Video Management Solution launched this week.
"Essentially, as operators are investing in video optimization solutions, they have been asking how to manage video delivery across separate enforcement points. Some vendors are supporting Gx, others are supporting proprietary extensions or proprietary protocols. Some of these vendors have created quality of experience metrics as well, which are used locally for static, rule-based video optimization."
Openet has been working with two vendors in the video optimization space to try to harmonize video optimization methods with policy management. For instance, depending on the resulting quality of a video after optimization, the PCRF could decide to zero-rate that video if the quality was below a certain threshold.
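That zero-rating example (do not charge for a video whose post-optimization quality fell below a given threshold) can be sketched as a simple charging rule. The quality score, threshold and rating groups below are hypothetical and are not Openet's implementation.

```python
def rate_video_session(bytes_used, delivered_mos, free_threshold_mos=2.5):
    """Hypothetical charging rule: if the delivered quality (a MOS-like score from 1 to 5)
    fell below the threshold after optimization, zero-rate the session; otherwise charge it
    against the normal video rating group."""
    if delivered_mos < free_threshold_mos:
        return {"rating_group": "zero-rated", "charged_bytes": 0}
    return {"rating_group": "video", "charged_bytes": bytes_used}

# Example: a heavily degraded session is not charged
print(rate_video_session(bytes_used=50_000_000, delivered_mos=2.1))
```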


The main solution features highlighted by Gary are below:
  • Detection of premium content: The PCRF can be aware of agreements between the content provider and operator and provisioned with rules to prioritize or provide better quality to certain content properties.
  • Content prioritization based on time of day or congestion detection
  • Synchronization of rules across policy enforcement points, to ensure for instance that the throttling engines at the DPI level and at the video optimization engine level do not clash
  • Next hop routing, where the PCRF can instruct the DPI to route the traffic within the operator network based on what the traffic is (video, mail, P2P...)
  • Dynamic policies to supplement and replace static rules provisioned in the video optimization engine, reacting to network congestion indications, subscriber profile, etc.


I think it is a good step for Openet to take some thought leadership in this space. Operators need help to create a carefully orchestrated delivery chain for video.
While Openet's solution might work well with a few vendors, I think, though, that a real industry effort in standardization is necessary to provide video-specific extensions to the Gx policy interface.
Delivering and optimizing video in a wireless network results in a destructive user experience whenever the control plane enabling feedback on congestion, original video quality, resulting video quality, and device and network capabilities is not shared across all policy enforcement and policy decision points.

Friday, September 9, 2011

How to charge for video? part 3 - Pros and Cons

Here are the pros and cons of the methods identified in the previous post.



  • Unlimited usage
    • Pros: Customer friendly, good for acquisition and churn reduction; will be a real differentiator in the future.
    • Cons: Hard to plan network capacity; expensive if data usage continues doubling on a yearly basis.
  • Fair limit
    • Pros: Provides some capacity planning.
    • Cons: The limit tends to change often, as the ratio of abusers vs. heavy users goes down.
  • Hard cap
    • Pros: No revenue leakage; easy network planning (max capacity needed = max number of users x cap).
    • Cons: Not customer friendly; does not allow the capture of additional revenue.
  • Hard cap with overage fee
    • Pros: Can be very profitable with a population that has frequent overage.
    • Cons: Many customers complain of bill shock.
  • Soft cap
    • Pros: Customer friendly, easy to understand.
    • Cons: Not as profitable in the short term.
  • Soft cap with throttling
    • Pros: A better alternative to a hard cap in markets where video usage is not yet very heavy.
    • Cons: Becomes less and less customer friendly as video traffic increases.
  • Speed capping
    • Pros: Very effective for charging per type of usage and educating customers.
    • Cons: Requires a sophisticated network (DPI + charging + subscriber management).
  • Application bundling
    • Pros: Popular in mature markets with high competition, where subscribers become experts at shopping and comparing the different offerings.
    • Cons: Complex; requires a sophisticated network and a good understanding of subscriber demographics and usage to maximize revenue.
  • Metered usage
    • Pros: Very effective way to ensure that capacity planning and revenue are tied.
    • Cons: Not very popular, as many subscribers do not understand megabytes and how 2 minutes of video could "cost" from 1 to 10 times as much.
  • Content based charging
    • Pros: Allows sophisticated tariffing that maximizes revenue.
    • Cons: Complex; requires a sophisticated network and a good understanding of subscriber demographics and usage to maximize revenue. Technology not quite ready.
  • Time of day charging
    • Pros: For operators who have a "prime time" effect, with peaks an order of magnitude higher than average traffic, an effective way to monetize the need to size for peak.
    • Cons: Not very popular. The network is still underutilized most of the time.
  • Location based charging
    • Pros: Will allow operators with "hot spots" to try to mitigate usage in these zones, or at least to finance capacity.
    • Cons: Most subscribers won't accept having to carry a map to understand how much their call/video will cost them.

As with many trends in wireless, it will take a while before the market matures enough to elaborate a technology and a business model that are both user-friendly and profitable for operators. Additionally, the emergence of over-the-top traffic, with content providers and aggregators now selling their services directly to customers, forces the industry to examine charging and tariffing models in a more fundamental fashion.
Revenue sharing, network sharing and load sharing require traditional core network technologies to be exposed to external entities for a profitable model where brands, content owners, content providers and operators are not at war. New collaboration models need to be thought of. Additionally, while the technology has made much progress, the next generation of DPI, PCRF and OSS/BSS will need to step up to allow for these sophisticated charging models.

Thursday, September 8, 2011

How to charge for video? part 2 - pricing models

While 4G is seen as a means to increase capacity, it is also a way for many operators to introduce new charging models and to depart from bundled, unlimited data plans.
Let’s look at some of the strategies in place for data pricing in a video world:
  • Unlimited usage: This category tends to disappear as data demand increases beyond network capacity. It is still used by new entrants or followers with a disruptive play.
    • Fair limit: even with unlimited packages, many operators tend to enforce a fair limit, usually within 90% of their subscribers' usage.
  • Capacity capping: this mechanism consists in putting a limit on the subscriber's capacity to use data on a monthly basis. It is usually associated with a flat monthly fee and is mostly a defensive measure. Past that limit, the operator has four choices:
    • Hard cap: no data usage is allowed beyond the limit. The subscriber must wait for the next period to use the service anew.
    • Hard cap with overage fee: once the customer has reached her limit, a fee per metered usage is imposed, traditionally at a very high rate. For instance, 20 € for 2GB and 1 € per additional 10MB (a small bill-calculation sketch follows this list).
    • Soft cap: the operator introduces several levels of caps and usage; once a customer reaches a cap, she switches to the next one.
    • Soft cap with throttling: the operator throttles the speed of data delivery past the cap, usually at a rate that makes it inefficient or impossible to use data-intensive applications such as video. It is also called "trickle-loading".
  • Speed capping: as video, P2P and download usage becomes close to fixed broadband, operators have started to provide means to measure and charge for different speeds and usage. It allows the creation of different packages per type of usage:
    • Low speed for transactional use (email)
    • Medium speed for real time (social networks, internet music and radio)
    • High speed for heavy use (downloads and videos)
  • Application bundling: this method consists in grouping applications or usage into bundles with individual tariffing schemes. For instance: free, unlimited IM, Facebook, Twitter and email at $20 per month up to 2GB, no P2P...
  • Metered usage: this method consists in charging based on the amount of data consumed monthly by the subscriber.
  • Contextual charging:
    • Content based charging: this is the target of many operators, being able to differentiate between types of content, origin and quality, and create a tariff grid accordingly. For instance, a pricing structure with different rates for HD and SD video, on deck or off deck, sport or news, live or VOD...
    • Time of day charging: this is a way to make sure that peak capacity is smoothed throughout the day, or to get the most margin from the busiest times.
    • Location based charging: still embryonic, mostly linked to femtocell deployments.
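As a concrete illustration of the overage model quoted in the list above (20 € for 2GB, then 1 € per additional 10MB), here is a small sketch computing a monthly bill under a hard cap with overage fee; the tier values are the example figures from the text, everything else is an assumption.

```python
import math

def monthly_bill_eur(used_mb, plan_fee=20.0, included_mb=2048, fee_per_block=1.0, block_mb=10):
    """Hard cap with overage: flat fee for the included bucket, then a fee per started
    block of overage (figures taken from the example above)."""
    if used_mb <= included_mb:
        return plan_fee
    overage_blocks = math.ceil((used_mb - included_mb) / block_mb)
    return plan_fee + overage_blocks * fee_per_block

# Watching a couple of HD videos past the cap makes the "bill shock" obvious:
print(monthly_bill_eur(used_mb=4096))   # ~2 GB over the cap -> 225.0 €
```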
In my next post, I will look at the pros and cons of each charging model.