
Tuesday, October 3, 2023

Should regulators forfeit spectrum auctions if they can't resolve Net Neutrality / Fair Share?

I have been writing about Net Neutrality and Fair Share broadband usage for nearly 10 years. Both sides of the argument have merit, and it is difficult to find a balanced view represented in the media these days. Absolutists would have you believe that internet usage should be unregulated, with everyone able to stream, download and post anything anywhere, without respect for intellectual property or fair usage; on the other side of the fence, service provider dogmatists would like to control, apportion, prioritize and charge based on their own interests.

Of course, the reality is a little more nuanced. A better understanding of the nature and evolution of traffic, as well as the cost structure of networks, helps to appreciate the respective parties' stances and offers a better view of what could be done to reduce the chasm.

  1. From a cost structure perspective first, our networks grow and accommodate demand differently depending on whether we are looking at fixed line / cable / fibre broadband or mobile. 
    1. In the first case, capacity growth is a function of technology and civil works. 
      1. On the technology front, the evolution from dial-up / PSTN to copper and then fiber has dramatically increased network capacity and has followed ~20-year cycles. The investments are enormous and require the deployment and management of central offices and their evolution to edge compute data centers. These investments happen in waves within a relatively short time frame (~5 years). Once operated, the return on investment is a function of the number of users and the utilization rate of the asset, which in this case means filling the network with traffic.
      2. On the civil works front, throughout the technology evolution, continuous work is ongoing to lay transport fiber along new housing developments, while replacing antiquated and aging copper or cable connectivity. This is a continuous burn and its run rate is a function of the operator's financial capacity.
    2. In mobile networks, you can find similar categories, but with a much different balance and impact on ROI.
      1. From a technology standpoint, the evolution from 1G to 5G has taken roughly 10 years per cycle. A large part of the investment for each generation is a spectrum license acquired from the regulator / government. In addition, most network elements, from the access to the core and OSS / BSS, need to be changed. The transport part relies in large part on the fixed network above. Until 5G, most of these elements were built on proprietary servers and software, which meant a generational change induced a complete forklift upgrade of the infrastructure. With 5G, the separation of software and hardware, the extensive use of COTS hardware and the implementation of cloud-based separation of the user and control planes should mean that the next generational upgrade will be less expensive, with only software and part of the hardware necessitating a complete refresh.
      2. The civil works for mobile networks are comparable to the fixed network for new coverage, but follow the same cycles as the technology timeframe with respect to the upgrades and changes necessary to the radio access. Unlike the fixed network, though, there is an obligation of backwards compatibility, with many networks still running 2G, 3G and 4G while deploying 5G. The real estate being essentially antennas and cell sites, this becomes a very competitive environment with limited capacity for growth in space, pushing service providers to share assets (antennas, spectrum, radios...) and to deploy, whenever possible, multi-technology radios.
The conclusion here is that fixed networks have long investment cycles and ROI and low margins, relying on the number of connections and traffic growth, while mobile networks have shorter investment cycles, with bursty margin growth and reduction with each new generation.

What does this have to do with Net Neutrality / Fair Share? I am coming to it, but first we need to examine the evolution of traffic and prices to understand where the issue resides.

Now, in the past, we had to pay for every single minute, text or kilobyte received or sent. Network operators were making money off traffic growth and were pushing users and content providers to fill their networks. Video somewhat changed that. A user watching a 30-second video doesn't really perceive whether the video is at 720p, 1080p or 4K, 30 or 60 fps; it is essentially the same experience. That same video, though, can vary in size by a factor of 20 depending on its resolution. To compound the issue, operators foolishly transitioned to all-you-can-eat data plans with 4G to acquire new consumers, a self-inflicted wound that has essentially killed their 5G business case.
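A back-of-the-envelope calculation illustrates that size spread. The bitrates below are assumed, typical figures for illustration, not measured encoder output, which varies with codec and content.

```python
# Rough illustrative encoding bitrates in Mbps (assumed values, not measured;
# real encoder output varies with codec and content).
BITRATES_MBPS = {"720p30": 2.5, "1080p30": 5.0, "4K60": 50.0}

def clip_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes: bitrate x duration / 8 bits per byte."""
    return bitrate_mbps * seconds / 8

for res, bitrate in BITRATES_MBPS.items():
    print(f"{res}: ~{clip_size_mb(bitrate, 30):.1f} MB for a 30-second clip")
# With these assumed bitrates, the 4K60 clip is 20x the size of the 720p30 one
# for the exact same viewing time.
```

The viewer's experience is roughly constant across those renditions on a small screen; the operator's cost is not.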

I have written at length about the erroneous assumptions that underlie some of the discourse of net neutrality advocates. 

In order to understand net neutrality and traffic management, one has to understand the different perspectives involved.
  • Network operators compete against each other on price, coverage and, more importantly, network quality. In many cases, they have identified that improving or maintaining quality of experience is the single most important success factor for acquiring and retaining customers. We have seen it time and again with voice services (call drops, voice quality…), messaging (texting capacity, reliability…) and data services (video start, stalls, page load time…). These KPIs are at the heart of the operator's business. As a result, operators tend to improve or control user experience by deploying an array of traffic management functions.
  • Content providers assume that the highest quality of content (8K UHD video, for instance) equals maximum experience for the subscriber, and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. 
The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

This behavior leads to a network where resources can be in contention and all end-points vie for priority and maximum resource allocation. From this perspective one can understand that there is no such thing as "net neutrality" at least not in wireless networks. 

When network resources are over-subscribed, decisions are taken as to who gets more capacity, priority, speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When in contention, resources are allocated based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface thinking. Net neutrality should be the opposite of non-intervention: its rules should apply equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.
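To show that transparent apportioning rules are possible in principle, here is a minimal sketch of classic max-min fair sharing, the textbook alternative to opaque box-level contention rules. The function name, flow labels and capacities are illustrative assumptions, not any operator's actual scheduler.

```python
def allocate(capacity, demands, weights=None):
    """Weighted max-min fair allocation of link capacity among flows.
    Flows demanding less than their fair share keep only what they need;
    the surplus is redistributed among the remaining flows."""
    weights = weights or {f: 1.0 for f in demands}
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        total_w = sum(weights[f] for f in active)
        # flows whose unmet demand fits within their weighted share exit satisfied
        satisfied = {f for f in active
                     if demands[f] - alloc[f] <= remaining * weights[f] / total_w}
        if not satisfied:
            for f in active:
                alloc[f] += remaining * weights[f] / total_w
            remaining = 0.0
        else:
            for f in satisfied:
                remaining -= demands[f] - alloc[f]
                alloc[f] = demands[f]
            active -= satisfied
    return alloc

# A light user gets exactly what it asks for; two heavy streams split the rest:
print(allocate(10.0, {"light": 2.0, "video_a": 20.0, "video_b": 20.0}))
```

The point is not the algorithm itself but that the rule is explicit and auditable, which is precisely what today's contention behavior is not.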

As we contemplate 6G, with hints of metaverse, augmented / mixed reality and hyper-connectivity, the cost structure of network infrastructure hasn't yet been sufficiently decoupled from traffic growth, and as we have seen, video is elastic and XR will be a heavy burden on networks. Network operators have essentially failed so far to offer attractive digital services that would monetize their network investments. Video and digital service providers are already paying for their on-premises and cloud infrastructure as well as transport; there is little chance they would finance telco operators' capacity growth.

Where does this leave us? It might be time for regulators / governments either to take an active and balanced role in Net Neutrality and Fair Share, to ensure that both sides can find a sustainable business model, or to forfeit spectrum auctions for the next generations.

Thursday, November 12, 2015

All you need to know about T-Mobile Binge On




Have you been wondering what T-Mobile US is doing with your video on Binge On?
Here is a small guide to and analysis of the service, its technology, features and limitations.

T-Mobile announced the launch of its new service, Binge On, at its Uncarrier X event on November 11. The company's CEO remarked that video is the fastest-growing data service, up 145% compared to 2 years ago, and that consumers are increasingly watching video on mobile devices and wireless networks, cutting the cord from their cable and satellite TV providers. Binge On was created to address these two market trends.

I have previewed many of the features launched with Binge On in my video monetization report and my blog posts (here and here, on encryption and collaboration) over the last 4 years.


Binge On allows any new or existing subscriber with a 3GB data plan or higher to stream videos for free from a number of apps and OTT properties. Let's examine what the offer entails:

  1. Subscribers with 3GB data plans and higher are automatically opted in. They can opt out at any moment and opt back in when they want. This is a simple mechanism that allows service transparency but, more importantly, underpins the claim of a net neutral service. I have pointed out for a long time that services can be managed (prioritized, throttled, barred...) as long as subscribers opt in to them. Video optimization falls squarely in that category, and T-Mobile certainly heeded my advice in that area. More on this later.
  2. Services streaming free in Binge on are: Crackle, DirecTV, Encore, ESPN, Fox Sports, Fox Sports GO, Go90, HBO GO, HBO NOW, Hulu, Major League Baseball, Movieplex, NBC Sports, Netflix, Showtime, Sling Box, Sling TV, Starz, T-Mobile TV, Univision Deportes, Ustream, Vessel, Vevo, VUDU.
  3. You still have to register / subscribe to the individual services to be able to stream them free on T-Mo's network.
  4. Interestingly, no Google properties (YouTube) or Facebook are included yet. Discussions are apparently ongoing.
  5. These OTT video services maintain their encryption, so the content and consumer interactions are safe. 
  6. There were mentions of a mysterious "T-Mobile proprietary streaming technology and video optimization" that requires video service providers to integrate with T-Mobile. This is not transcoding; it relies on adaptive bit rate optimization, ranging from throttling data, to transrating, to manifest manipulation (asking video providers to serve an unencrypted manifest so that it can be edited and limited to 480p definition).
  7. Yep, video is limited to 480p definition, which T-Mobile defines as DVD quality. It's going to look good on a smartphone, OK on a tablet and bad on anything bigger / tethered.
  8. I have an issue with the representation "We've optimized streaming so that you can watch 3x more video" because it is mostly: 
    1. Inaccurate (if the plan is unlimited, how can unlimited be 3x what you are currently watching?); 
    2. Inexact (if they are referring to the fact that a 480p file can on average be 1/3 the size of a 1080p file, which is close enough): they wrongly assume that you only watch HD 1080p video, while most of these providers rely on adaptive bit rate, varying the video definition based on network conditions;
    3. Wrong, since most people assume watching 3x more video means spending 3x the amount of time watching video, rather than 3x the file size;
    4. In bad faith, since T-Mobile limited video definition so that users wouldn't kill its network. Some product manager / marketing drone decided to turn this limitation into a feature...
  9. [Chart: file size per hour of streamed video, per definition]
  10. Now, in the fine print, for the rest of the video you watch that is not part of the package, expect that "Once high-speed data allotment is reached, all usage slowed to up to 2G speeds until end of bill cycle." 2G speed? For streaming video? Like watching animated GIFs? It's understandable that there has to be a carrot (and a stick) for providers who have not joined yet, as well as some fair usage rules for subscribers exceeding their data plans, but 2G speed? Come on, you might as well stop the stream rather than pretend that you can stream anything at 128 kbps.
  11. More difficult to justify is the mention "service might be slowed, suspended, terminated, or restricted for misuse, abnormal use, interference with our network or ability to provide quality service to other users". So basically, there is no service level agreement for minimum quality of service. Ideally, if a video service is limited to 480p (when you are paying Netflix, etc. for 1080p or even 4K, let's remember), one should expect either a guaranteed level or at least a minimum quality floor.
  12. Another vague and spurious rule is "Customers who use an extremely high amount of data in a bill cycle will have their data usage de-prioritized compared to other customers for that bill cycle at locations and times when competing network demands occur, resulting in relatively slower speeds." This is not only vague and subjective, it will vary over time depending on location (with 145% growth in 2 years, an abnormal video user today will be average tomorrow). More importantly, it goes against some of the net neutrality rules.
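The manifest manipulation described in point 6 can be made concrete. Below is a minimal sketch, assuming an unencrypted HLS master playlist with standard `RESOLUTION` attributes, of how a 480p cap could be applied by filtering out higher variant streams. This is an illustration of the technique, not T-Mobile's actual implementation.

```python
import re

def cap_hls_manifest(manifest: str, max_height: int = 480) -> str:
    """Drop variant streams taller than max_height from an HLS master playlist.
    Assumes the provider exposes an unencrypted manifest, as Binge On reportedly
    requires; illustrative sketch only."""
    out, skip_next = [], False
    for line in manifest.splitlines():
        if skip_next:
            # the URI line that follows a dropped #EXT-X-STREAM-INF entry
            skip_next = False
            continue
        match = re.search(r"RESOLUTION=\d+x(\d+)", line)
        if line.startswith("#EXT-X-STREAM-INF") and match and int(match.group(1)) > max_height:
            skip_next = True
            continue
        out.append(line)
    return "\n".join(out)
```

A player fed the rewritten playlist simply never sees the 1080p rendition, so its adaptive bit rate logic tops out at 480p without any transcoding.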
T-Mobile innovates again with a truly new approach to video services. Unlike Google's Project Fi, it is a bold strategy, relying on video optimization to provide a quality ceiling, and on integration with OTT content providers to enable the limitation but, more importantly, an endorsement of the service. The service is likely to be popular in terms of adoption and usage; it will be interesting to see, as its user base grows, how user experience evolves over time. At least there is now a fixed ceiling for video, which will allow for network capacity planning, removing variability. What is most remarkable in the launch, from my perspective, is the desire to innovate and to take risks by launching a new service, even if there are some limitations (video definition, providers...) and risks (net neutrality).

Want to know more about how to launch a service like Binge On? What technology, vendors, price models...? You can find more in my video monetization reports and workshop.

Monday, June 8, 2015

Data traffic optimization feature set

Data traffic optimization in wireless networks has reached a mature stage as a technology. The innovations that marked the years 2008 – 2012 are now slowing down, and most core vendors exhibit a fairly homogeneous feature set. 

The difference comes in the implementation of these features, which can yield vastly different results, depending on whether vendors use open source or purpose-built caching or transcoding engines and whether congestion detection is based on observed or deduced parameters.

Vendors nowadays tend to differentiate on QoE measurement / management and monetization strategies, including content injection, recommendation and advertising.

Here is a list of commonly implemented optimization techniques in wireless networks.
  •  TCP optimization
    • Buffer bloat management
    • Round trip time management
  • Web optimization
    • GZIP
    •  JPEG / PNG… transcoding
    • Server-side JavaScript
    • White space / comments… removal
  • Lossless optimization
    • Throttling / pacing
    • Caching
    • Adaptive bit rate manipulation
    • Manifest mediation
    • Rate capping
  • Lossy optimization
    • Frame rate reduction
    • Transcoding
      • Online
      • Offline
      • Transrating
    • Contextual optimization
      • Dynamic bit rate adaptation
      • Device targeted optimization
      • Content targeted optimization
      • Rule-based optimization
      • Policy driven optimization
      • Surgical optimization / Congestion avoidance
  • Congestion detection
    • TCP parameters based
    • RAN explicit indication
    • Probe based
    • Heuristics combination based
  • Encrypted traffic management
    • Encrypted traffic analytics
    • Throttling / pacing
    • Transparent proxy
    • Explicit proxy
  • QoE measurement
    • Web
      • page size
      • page load time (total)
      • page load time (first rendering)
    • Video
      • Temporal measurements
        • Time to start
        • Duration loading
        • Duration and number of buffering interruptions
        • Changes in adaptive bit rates
        • Quantization
        • Delivery MOS
      • Spatial measurements
        • Packet loss
        • Blockiness
        • Blurriness
        • PSNR / SSIM
        • Presentation MOS
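To make the temporal video measurements above concrete, here is a minimal sketch that derives a few of these KPIs from a stream of player events. The event names and timestamps are assumptions for illustration; real probes and clients report these signals in vendor-specific formats.

```python
def video_qoe(events):
    """Derive time-to-start, stall count and stall ratio from a list of
    (timestamp_seconds, event_name) pairs. Event names are hypothetical."""
    start = first_frame = session_end = None
    stall_start, stall_durations = None, []
    for ts, event in events:
        if event == "play_request":
            start = ts
        elif event == "first_frame":
            first_frame = ts
        elif event == "stall_start":
            stall_start = ts
        elif event == "stall_end":
            stall_durations.append(ts - stall_start)
        elif event == "session_end":
            session_end = ts
    watch_time = session_end - first_frame
    return {
        "time_to_start_s": first_frame - start,
        "stall_count": len(stall_durations),
        # fraction of the viewing session spent rebuffering
        "stall_ratio": sum(stall_durations) / watch_time,
    }

session = [(0, "play_request"), (2, "first_frame"),
           (10, "stall_start"), (12, "stall_end"), (42, "session_end")]
print(video_qoe(session))
```

Spatial measurements (blockiness, PSNR / SSIM...) require access to the decoded frames and are not derivable from timing events alone.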


An explanation of each technology and its feature set can be obtained as part of the mobile video monetization report series or individually as a feature report or in a workshop.

Tuesday, March 31, 2015

Net neutrality... so what?


[...] 
In the US, on February 26, days before the Mobile World Congress, the Federal Communications Commission released a declaratory ruling on "protecting and promoting the open internet". The reclassification of fixed and mobile network services under Title II telecom services by the FCC means in substance that network operators will be prevented from blocking, throttling and prioritizing traffic, and will have to be transparent in the way their traffic management rules are applied. This is essentially due to an earlier ruling from the DC Circuit, Verizon v. FCC, which struck down the FCC's rules against blocking and traffic discrimination but remarked that "broadband providers represent a threat to Internet openness and could act in ways that would ultimately inhibit the speed and extent of future broadband deployment."

It is a great issue that broadband providers in this case are exclusively network operators, and not OTT providers, who have, in my mind, the same capacity and a similar track record in that matter. The FCC tried to provide "more broadband, better broadband and open broadband" and in its haste has singled out one party of the ecosystem, essentially condemning network operators to a utility model. This nearsightedness is unlikely to stand unchallenged, as several companies have already decided to fight it. Less than a month after its publication, the order is being challenged in court by the United States Telecom Association, a lobbying group representing broadband and wireless network operators, as well as Alamo, a broadband provider in Louisiana. There is no doubt that the legal proceedings will occupy and fatten lawyers on both sides for years to come.

In Europe, the net neutrality debate is also far from settled. After the European Commission seemed to take a no-throttling, no-blocking, no-prioritization stance in its "Digital Single Market" initiative, network operators, through their lobbying arm ETNO (European Telecommunications Network Operators' Association), started to challenge these provisions at the country level. Since the European Commission has not yet passed a law on the subject, the likelihood of a strong net neutrality stance will depend on support from each nation. In November 2014, compromises in the form of "non-discriminatory and proportionate" plans were discussed. The result is that net neutrality is still very much a moving target, with a lot of effort being expended to enable a managed internet experience, with a fast lane and a best-effort lane. The language and ideas surrounding net neutrality are very vague, suggesting either a great lack of technical expertise or a reluctance to provide enforceable guidance (or both). It is more likely that countries will individually start passing laws to regulate some aspects of traffic management until a consensus is found at the European level.


In conclusion, there is obviously much debate over net neutrality globally, with many emotional, commercial and technical implications. There is at this stage no evidence that any regulatory authority has a good enough grasp of both the technical and commercial realities to make a fair and enforceable ruling. As a result, politics, public sentiment, lobbying and lawyers will dictate the law for the next 5 years. In the meantime, it is likely that loopholes will be found and that collaborative approaches will demonstrate a lucrative business model, which may well make the whole debate obsolete.

More analysis on traffic encryption, mobile advertising, data, video, mobile and media trends in  "Mobile video monetization 2015". 

Monday, March 25, 2013

Video optimization 2013: Executive summary





Video accounts for over 50% of overall data traffic in mobile networks in 2013, and its compound annual growth rate is projected at 75% over the next 5 years. Over 85% of that video traffic is generated by OTT properties, and mobile network operators are struggling to accommodate the demand in a profitable fashion. New business models are starting to emerge, together with KPIs and technologies such as video optimization to manage, control and monetize OTT video traffic. This is the backdrop for this report.
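To put that growth rate in perspective, compounding 75% annually is worth making explicit; the one-liner below shows the resulting multiple (the base volume of 1.0 is a placeholder, not a traffic figure from the report).

```python
def compound_growth(base_volume, cagr, years):
    """Traffic volume after `years` of compound annual growth at rate `cagr`."""
    return base_volume * (1 + cagr) ** years

# A 75% CAGR multiplies traffic roughly 16x over 5 years:
print(round(compound_growth(1.0, 0.75, 5), 1))  # → 16.4
```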



In September 2012, Jens Schulte-Bockum, CEO of Vodafone Germany, shocked the industry by announcing that the 10% of their customer base who had elected to shift to the LTE network had a fundamentally different usage pattern from their 3G counterparts:
“Voice, text, other messaging and data - everything that makes money for us - uses less than 15%. The bit that doesn’t make money uses 85% of the capacity. Clearly we are thinking about how we can monetise that.”
“The bit that does not make money for us” is mobile OTT video.
The Bundesnetzagentur (BNetzA), Germany's telecom regulator, has mandated that the roll-out of LTE happen first in rural areas, before covering urban centres, thus ensuring quasi-100% geographical coverage at launch. While many point out that the 85% of video transiting through the 4G network is a manifestation of cord cutting, it is not the exclusive use and remains a valid LTE use case.
2012 was the first year video was responsible for over half of global mobile data traffic. Over 85% of that video traffic is OTT, generating little revenue for mobile network operators.
As 4G deployments roll out across the globe, many network operators had envisioned that this additional capacity would be sufficient to bridge the video traffic growth, allowing enough headroom for the creation and roll-out of new services. The exponential growth of video usage, encouraged by the increasing penetration of large-screen devices, the introduction of higher definition content and the growth of adaptive streaming technology, is not likely to abate. It looks like, by the time LTE reaches mass market penetration, many networks will find themselves still congested, with an unbalanced cost / revenue structure due to the predominance of OTT video.
In reaction to this threat, many mobile network operators transitioned generous unlimited data plans to more granular charging methods, oftentimes implementing throttling and caps to reduce unprofitable traffic growth. These methods were implemented with varying results but little success in monetizing OTT video traffic without alienating the consumer.
New technologies have made their debut, such as small cells, heterogeneous network management, traffic offload, edge caching, edge packaging, traffic shaping, cloud-based virtualized network functions… and new business models are starting to emerge, reinventing relationships between network operators, content providers, and device manufacturers.
Video optimization in 2013 is a mature market segment, deployed in over 150 networks globally; it generated over $260m in 2012 and is projected to generate close to $390m in 2013. Video optimization was, in the first instance, sold as a means to reduce video volume, thus potentially deferring investment costs for network build-out. That was a wrong assumption, as most deployments in congested networks saw no reduction in volume and little deferment of investment. In most cases, the technology allowed more users to occupy the network in congested areas. A new generation of products and vendors is starting to emerge to manage the video experience in a more nimble, granular fashion.
{Core Analysis} believes that video optimization will continue to be deployed in most networks as a means to control and manage video traffic. 

Wednesday, December 21, 2011

Allot to acquire Flash Networks for $110 /$120 M?

This is the latest rumor from Globe. Allot, which raised almost $80M a month ago and was rumored first to be acquired by F5, then to be discussing the acquisition of Mobixell or PeerApp last year, has a $500M market cap. Flash Networks has raised over $61M.

The resulting company could be booking about $120M in sales and be profitable.

Allot, in a briefing two weeks ago with Jonathon Gordon, Director of Marketing, noted: "Our policies focus more and more on revenue generation. With over 100 charging plans surveyed in our latest report, we see more and more demand for bundled plans for social networks and video. We can already discriminate traffic that is embedded; for instance, we can see that a user is watching a video within a Facebook browsing session, but we cannot recognize and analyse the video in terms of format, bit rate, etc... Premium video-specific policies raise a lot of interest these days."

No doubt, the acquisition of an optimization vendor like Flash Networks could solve that problem, by creating a harmonious policy and charging function that actually manages video, which accounts for over half of 2011 mobile traffic globally.

As discussed here and here, video optimization is becoming an attractive target for telco vendors who want to extend beyond DPI and policy. Since video is such a specialized skill, it is likely that growth in this area will not be organic. It is likely that the browsing gateway / DPI / PCRF / optimization segments will consolidate over the next 2 years, as they are atomized markets, with small, technology-driven, under-capitalized companies and medium-to-large mature companies looking to increase market share or grow the top line.


Wednesday, November 30, 2011

Mobixell update and EVO launch

Mobixell was founded in December 2000 to focus on mobile multimedia adaptation. Its first product, launched in 2002, was for MMS (Multimedia Messaging Service) adaptation and was sold through OEMs such as Huawei, Ericsson, NSN and others. It launched a mobile TV platform in 2008 and a mobile video optimization product in 2010. Along the way, Mobixell acquired Adamind in 2007 and 724 Solutions in 2010.


Mobixell has a 16% market share of the deployed base of video optimization engines. Nearly 18 months after the launch of the video optimization module in its Seamless Access product suite, Mobixell launches EVO (Evolved Optimization).


As a follow-up to the 360-degree review of the video optimization market, and in anticipation of the release of my market report, I had a recent chat with Yehuda Elmaliach, CTO and co-founder of Mobixell, about their recent announcement introducing Mobixell EVO.


"We wanted to address the issue of scalability and large deployments in video optimization in a new manner. As traffic grows from Gbps to 10's and 100's of Gbps, we see optimization, and particularly real-time transcoding, as a very CPU-intensive activity, which can require a lot of CAPEX. The traditional scaling model of adding new blades, chassis and sites does not make sense economically if traffic grows according to projections."
Additionally, Yehuda adds: "We wanted to move away from pure volume reduction, as a percentage saving of traffic across the line, to a more granular approach, focusing on congestion areas and peak hours."


Mobixell EVO is an evolution of Seamless Access video optimization that complements Mobixell's capabilities with cloud-based services and benefits. The current Seamless Access product sits on the Gi interface, after the GGSN, and performs traffic management, shaping and video optimization. The video optimization features at that level are real-time transcoding, dynamic bit rate adaptation, offline transcoding and caching. Mobixell EVO proposes to complement or replace this arrangement with a cloud-based implementation that will provide additional computational power and storage in an elastic and cost-effective manner for real-time transcoding and for a hierarchical caching system.


Yehuda adds: "We have launched this product based on customer feedback and demand. We do not see customers moving their infrastructure to the cloud only for the purpose of optimization, but for those who already have a cloud strategy, it fits nicely. EVO is built on the principles of virtualization, geometric and automatic scalability and self replication to take advantage of the cloud architecture. "


An interesting development for Mobixell. EVO has no commercial deployment yet and is planned to be generally available in Q2 2012, after the current trials and proofs of concept. Mobixell sees this platform being deployed first within carriers' private clouds, then maybe using mixed private and public clouds. The idea is a waterfall implementation, where routine optimization is performed at the Gi level, then moves to private or public clouds as peaks and surges appear on the network. The idea has a certain elegance, particularly for operators that experience congestion in a very peaky, localized manner. In that case, a minimum investment can be made on Gi and complemented with cloud services as peaks reach certain thresholds. It will be interesting to see if Mobixell can live up to the promises of EVO, as security, bandwidth, latency and scalability can reduce the benefits of a mixed core / cloud implementation if not correctly addressed.
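The waterfall idea reduces to a simple tiered placement decision. The sketch below is an illustration of that principle only; the tier names and capacities are assumptions, not details of Mobixell's actual design.

```python
def waterfall_placement(load_gbps, gi_capacity=10.0, private_cloud_capacity=10.0):
    """Place transcoding load tier by tier: fill the on-premises Gi tier first,
    spill the excess to a private cloud, and only then to public cloud.
    Capacities in Gbps are illustrative assumptions."""
    gi = min(load_gbps, gi_capacity)
    private = min(load_gbps - gi, private_cloud_capacity)
    public = load_gbps - gi - private
    return {"gi": gi, "private_cloud": private, "public_cloud": public}

# Off-peak, everything fits on the Gi tier; during a 25 Gbps surge,
# 5 Gbps would spill past the private cloud into the public one.
print(waterfall_placement(4.0))
print(waterfall_placement(25.0))
```

The economic appeal is that the fixed Gi investment is sized for routine load, while the elastic tiers absorb only the peaks.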
Mobixell is the second vendor to launch cloud based optimization after Skyfire.

Monday, November 21, 2011

MTS Russia selects Flash Networks



The deal was won last year, after an RFP shortlisting 4 major vendors. A trial in Murmansk followed the selection of Flash Networks. The solution is being deployed commercially and will be live at the end of 2011.


This is the first major announcement from Flash Networks in the video optimization space, confirming the conversion and acquisition of some of its customers from web optimization to video optimization.


“Using the data optimization platform allows us to reduce our mobile network data transmission load by almost 40% and our transit load by 30%, ultimately resulting in faster Internet speeds and better quality of data services for our users,” said Sergei Stepanyuk, Head of Data Transmission Department at MTS. 


Full release here.

Thursday, September 15, 2011

Openet's Intelligent Video Management Solution

As you well know, I have been advocating closer collaboration between DPI, policy management and video optimization for a while (here and here, for instance). 


In my mind, most carriers dealt mostly with transactional data traffic until video came along. There are some fundamental differences between managing transactional and flow-based data traffic. The quality of experience of a video service depends as much on the intrinsic quality of the video as on the way that video is delivered.


In a mobile network, with a daisy chain of proxies and gateways (GGSN, DPI, browsing gateway, video optimization engine, caching systems...), the user experience of a streamed video is only going to be as good as the lowest common denominator of that delivery chain.




Gary Rieschick, Director – Wireless and Broadband Solutions at Openet spoke with me today about the Intelligent Video Management Solution launched this week.
"Essentially, as operators are investing in video optimization solutions, they have been asking how to manage video delivery across separate enforcement points. Some vendors are supporting Gx, others are supporting proprietary extensions or proprietary protocols. Some of these vendors have created quality of experience metrics as well, that are used locally, for static rule-based video optimization."
Openet has been working with two vendors in the video optimization space to try and harmonize video optimization methods with policy management. For instance, depending on the resulting quality of a video after optimization, the PCRF could decide to zero rate that video if the quality was below a certain threshold.
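That quality-based zero-rating rule can be sketched in a few lines. This is purely an illustration of the logic described above, not Openet's actual Gx schema; the function name, score scale and threshold are all hypothetical.

```python
def charging_decision(quality_score, threshold=0.6):
    """Hypothetical PCRF rule: if optimization degraded the video below an
    acceptable quality threshold, zero-rate it instead of charging the user.
    quality_score is an illustrative 0.0-1.0 quality-of-experience metric."""
    return "zero-rate" if quality_score < threshold else "charge"

# A heavily degraded video is not billed; an acceptable one is.
assert charging_decision(0.4) == "zero-rate"
assert charging_decision(0.9) == "charge"
```

The point of the sketch is that the decision lives in the PCRF, fed by a quality metric reported back from the optimization engine, rather than in the enforcement point itself.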


The main solution features highlighted by Gary are below:
  • Detection of premium content: The PCRF can be aware of agreements between content providers and the operator, and be provisioned with rules to prioritize or provide better quality to certain content properties.
  • Content prioritization, based on time of day and congestion detection.
  • Synchronization of rules across policy enforcement points, to ensure for instance that the throttling engine at the DPI level and the one at the video optimization engine level do not clash.
  • Next hop routing, where the PCRF can instruct the DPI to route the traffic within the operator's network based on what the traffic is (video, mail, P2P...).
  • Dynamic policies to supplement or replace the static rules provisioned in the video optimization engine, making it reactive to network congestion indications, subscriber profile, etc.


I think it is a good step by Openet to take some thought leadership in this space. Operators need help to create a carefully orchestrated delivery chain for video.
While Openet's solution might work well with a few vendors, I think, though, that a real industry standardization effort is necessary to provide video-specific extensions to the Gx policy interface.
Delivering and optimizing video in a wireless network degrades the user experience whenever the control plane carrying feedback on congestion, original video quality, resulting video quality, and device and network capabilities is not shared across all policy enforcement and policy decision points.

Tuesday, July 5, 2011

BBTM Part 4: TIM & BitTorrent

TIM
Telecom Italia is facing the same issues most mature operators see today:
  • Mobile video traffic is growing explosively, threatening to outstrip current capacity
  • LTE is a few years away and requires a completely new network overlay
  • The introduction of tablets and smartphones is accelerating the phenomenon
Additionally, TIM is lobbying the GSMA to implement fast dormancy directives, so that device manufacturers and apps optimize signalling by sending messages in batches rather than on an ad hoc basis.
End-to-end QoS via CDN interconnection, with QoS guaranteed on a private backbone (IPX), is high on their agenda for video services.

TIM is answering these issues in a somewhat classic manner, introducing fair usage caps (daily, monthly), throttling, video optimization and policy management. The innovative part is the introduction of tiered QoS (speed, duration) per class of service, urging subscribers to select the speed and capacity best adapted to their current or projected usage.


An interesting data point from TIM's presentation relates to signalling congestion. In many cases, signalling is as much an issue as actual bandwidth in congested networks. Signalling is a function not only of the number of subscribers in a cell, but also of the type of device and the apps being used. For instance, Angry Birds on Android generates 351% more signalling than the iOS version, due to in-app advertising: the app polls and displays an ad at each level change, creating signalling overload.

 
 
BitTorrent
Eric Klinker, CEO of BitTorrent, walked into the room like a man with a target on his back. Seen by many as a powerful threat to the business models of content owners and telcos globally, BitTorrent is now advocating the use of its technology (µTorrent) as a highly scalable, secure way to transfer files, with priority handling.
The plan for world domination involves replacing TCP with P2P transfer, freeing capacity for the rest of the traffic.

 

What is interesting is that BitTorrent has worked, and is looking to work increasingly, with carriers to help with P2P bandwidth consumption and traffic steering. For instance, BitTorrent works with Telecom New Zealand to prioritize peer traffic on the island, reducing offshore traffic and the associated costs.
This is another example of opportunities for policies to transcend the core network, towards content and app providers.

Monday, May 16, 2011

Mobile video 102: lossless and lossy compression

Mobile video as a technology and market segment can at times be a little complicated.
Here is a simple syllabus, in no particular order, of what you need to know to be conversant in mobile video. It is not intended to be exhaustive or very detailed, but rather to provide a knowledge base for those interested in better understanding the market dynamics I address in other posts.

Compression (lossless) and optimization (lossy)
  • Compression is the action of reducing the size of the representation of a media object without losing data. It is lossless when, after decompression, the compressed media is absolutely identical to the original. Compression methods are based on statistical analysis, representing recurring data items within a file more compactly. PNG, GIF, Zip, gzip and deflate are lossless compression formats. Throttling, just-in-time delivery and caching are lossless delivery methods.
  • Optimization is a form of compression called lossy, in the sense that it discards data elements to achieve a reduced size. The optimized version is not identical to the original. Transcoding and transrating are lossy methods.
Lossy optimization methods:
  • Frames per second (fps): A video is composed of a number of still frames (pictures). The illusion of movement is achieved above roughly 15 frames per second. TV is 24 to 30 fps (depending on the standard and whether it is progressive or interlaced). Many lossy optimization methods will reduce the frame rate in order to reduce the size of a file.
    • Key frames: Not all frames contain the same amount of data. The main way to reduce the quantity of information in a video is to use statistical analysis to predict motion, in other words, to analyse the differences from one frame to the next. Most optimization methods will encode only the difference between a frame and the following one, therefore not coding all the information. Key frames, or intra frames, are the frames used as references. When lossy optimization is performed using fps reduction, one has to be careful not to remove the key frames, or the user experience will be garbled with many artifacts.
  • Bit rates: Bit rate is the rate at which a video is encoded (quality) or transmitted. 
    • Encoding bit rate: The encoding bit rate represents the amount of information that is captured in each frame. It is measured in kbps (kilobits per second) or Mbps (megabits per second). HD video is encoded at 20 Mbps, SD at 10 Mbps, internet video usually around 1 Mbps, and video transmitted on wireless networks between 200 and 700 kbps.
      • Variable bit rate (VBR), or transrating, is a lossy optimization method that varies the encoding bit rate throughout the video to take changing network conditions into account.
      • Constant bit rate (CBR) is used for broadcast, fixed-line connections and generally lossless transmissions.
    • Delivery bit rate: When a video is transmitted over a wireless network, the connection capacity dictates the user experience. The delivery bit rate should always exceed the encoding bit rate of the video for smooth viewing. If the delivery bit rate drops below the encoding bit rate, buffering and stop-and-go playback are experienced. Lossy optimization techniques such as VBR make it possible to reduce the encoding bit rate in real time as the delivery bit rate varies.
  • Transcoding is the action of decoding a video file and re-encoding it in a different format. This lossy method is effective at reducing a video file from a definition or format that is not suitable for mobile transmission (HD, 3D...). Additionally, significant size savings can be achieved by changing the aspect ratio (4:3 or 16:9 from TV) or changing the picture size (HD 1080 is 1920 x 1080 pixels, while most WVGA smartphones are 800 x 480 pixels). You can drastically reduce a video file's size by changing the picture size.
  • Transprotocoling is the action of changing the protocol used for video transmission. For instance, many legacy phones that do not support progressive download cannot access internet video unless it is transprotocoled to RTSP.
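The fps-reduction caveat from the key frames bullet above can be sketched in a few lines. This is a toy model, not a real codec: "I" marks a key (intra) frame and "P" a predicted frame, and the function name is hypothetical.

```python
# Hypothetical frame sequence: "I" = key (intra) frame, "P" = predicted frame.
frames = ["I", "P", "P", "P", "I", "P", "P", "P", "I", "P", "P", "P"]

def reduce_fps(frames, keep_every=2):
    """Drop delta frames to cut the frame rate, but always keep key frames,
    since decoders need them as references for the frames that follow."""
    kept = []
    for i, frame in enumerate(frames):
        if frame == "I" or i % keep_every == 0:
            kept.append(frame)
    return kept

reduced = reduce_fps(frames)
# Every key frame survives; only delta frames are discarded.
assert reduced.count("I") == frames.count("I")
```

Dropping an "I" frame instead would orphan every "P" frame that references it, which is exactly the garbled, artifact-ridden playback the bullet above warns about.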
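The encoding vs. delivery bit rate relationship above reduces to a simple comparison. The figures below reuse the illustrative ranges quoted in the bullets (200-700 kbps wireless, ~1 Mbps internet video); the function name is my own.

```python
def playback_is_smooth(encoding_kbps, delivery_kbps):
    """Smooth viewing requires the delivery bit rate to exceed the encoding
    bit rate; otherwise the player drains its buffer and stalls."""
    return delivery_kbps > encoding_kbps

# A 500 kbps mobile-encoded clip over a 700 kbps radio link plays smoothly...
assert playback_is_smooth(encoding_kbps=500, delivery_kbps=700)
# ...but the same link cannot sustain a 1 Mbps internet-grade encode.
assert not playback_is_smooth(encoding_kbps=1000, delivery_kbps=700)
```

VBR/transrating is the mechanism that keeps the first argument below the second in real time as radio conditions change.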
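The picture-size savings mentioned in the transcoding bullet are easy to quantify: raw pixel count scales with width x height, so downscaling 1080p to WVGA divides the data to be encoded by roughly a factor of five. This is a back-of-the-envelope estimate only; actual file-size savings depend on the codec and content.

```python
def pixel_ratio(src, dst):
    """Ratio of raw pixel counts between two frame sizes (width, height)."""
    return (src[0] * src[1]) / (dst[0] * dst[1])

hd_1080 = (1920, 1080)  # HD source resolution, as cited above
wvga = (800, 480)       # typical smartphone screen of the era

ratio = pixel_ratio(hd_1080, wvga)
print(f"1080p carries {ratio:.1f}x the pixels of WVGA")  # 5.4x
```

This is why resizing alone, before any frame-rate or bit-rate tricks, is one of the biggest levers a transcoder has for mobile delivery.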