Showing posts with label transcoding. Show all posts

Thursday, May 2, 2024

How to manage mobile video with Open RAN

Ever since the launch of 4G, video has been a thorny issue for network operators to manage. Most of them had rolled out unlimited or generous data plans without understanding how video would affect their networks and economics. Most videos streamed to your phone use a technology called Adaptive Bit Rate (ABR), which is supposed to adapt the video’s definition (think SD, HD, 4K…) to the network conditions and your phone’s capabilities. While this implementation was supposed to provide more control over the way videos were streamed on the networks, in many cases it had the reverse effect.

 

The multiplication of streaming video services has led to ferocious competition on the commercial and technological front. While streaming services visibly compete on their pricing and content attractiveness, a more insidious technological battle has also taken place. The best way to describe it is to compare video to a gas. Video will take up as much capacity in the network as is available.

When you start a streaming app on your phone, it will assess the available bandwidth and try to deliver the highest definition video available. Smartphone vendors and streaming providers try to provide the best experience to their users, which in most cases means getting the highest bitrate available. When several users in the same cell try to stream video, they are all competing for the available bandwidth, which leads in many cases to a suboptimal experience, as some users monopolize most of the capacity while others are left with crumbs.
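The rendition-selection logic at the heart of ABR can be sketched as follows. This is a hypothetical illustration, not any real player's algorithm; the bitrate ladder and safety margin are invented for the example:

```python
# Toy sketch of client-side ABR rendition selection: pick the highest
# rendition whose bitrate fits within the measured throughput.
# Ladder values and the safety margin are illustrative assumptions.

RENDITIONS_KBPS = [250, 750, 1500, 3000, 6000]  # low-def up to high-def tiers

def select_rendition(measured_kbps, ladder=RENDITIONS_KBPS, safety=0.8):
    """Return the highest bitrate that fits within a safety margin of the
    measured throughput; fall back to the lowest tier if nothing fits."""
    budget = measured_kbps * safety
    eligible = [r for r in ladder if r <= budget]
    return max(eligible) if eligible else min(ladder)
```

Note how this logic also explains the competition described above: when several clients share a cell, each one measures whatever bandwidth it can grab and climbs the ladder accordingly, so aggressive clients settle on high tiers while others are pushed to the bottom rung.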

 

In recent years, technologies have emerged to mitigate this issue. Network slicing, for instance, when fully implemented could see dedicated slices for video streaming, which would theoretically guarantee that video streaming does not adversely impact other traffic (video conferencing, web browsing, etc…). However, it will not resolve the competition between streaming services in the same cell.

 

Open RAN offers another tool for efficiently resolving these issues. The RIC (RAN Intelligent Controller) provides, for the first time, the capability to visualize a cell’s congestion in near real time and to apply optimization techniques with a great level of granularity. Until Open RAN, the means of visualizing network congestion were limited in a multi-vendor environment, and the means to alleviate it were broad and coarse. The RIC allows operators to create policies at the cell level, on a per-connection basis. Algorithms allow traffic type inference, and policies can be enacted to adapt the allocated bandwidth based on a variety of parameters such as signal strength, traffic type, congestion level, power consumption targets…
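As a hypothetical illustration of such a per-connection policy (the function, traffic classes, caps and thresholds below are all invented for the sketch, not a real RIC or xApp API), the decision logic might look like:

```python
# Invented sketch of an xApp-style per-connection policy: adapt each
# connection's bandwidth cap from inferred traffic type, cell congestion
# (0.0 = idle, 1.0 = saturated) and signal strength. All numbers are
# illustrative assumptions, not values from any real deployment.

def bandwidth_cap_kbps(traffic_type, congestion, signal_dbm):
    base = {"video": 4000, "voice": 100, "web": 1000}.get(traffic_type, 500)
    if congestion < 0.5:          # cell mostly idle: no real constraint
        return base
    factor = 1.0 - congestion     # shrink caps as the cell fills up
    if traffic_type == "voice":   # protect latency-sensitive traffic
        factor = 1.0
    if signal_dbm < -110:         # poor radio: don't waste grants on retries
        factor *= 0.5
    return round(base * factor)
```

The point of the sketch is the granularity: the rule runs per connection, with inputs (traffic type, congestion, radio conditions) that were simply not visible at this resolution in a multi-vendor RAN before the RIC.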

 

For instance, an operator or a private network for stadiums or entertainment venues could easily program their network to not allow upstream videos during a show, to protect broadcasting or intellectual property rights. This can be easily achieved by limiting the video uplink traffic while preserving voice, security and emergency traffic.

 

Another example would see a network actively dedicating deterministic capacity per connection during rush hour, or based on thresholds in a downtown core, to guarantee that all users have access to video services with equally shared bandwidth and quality.

 

A last example could see first responder and emergency services get guaranteed high-quality access to video calls and broadcasts.

 

When properly integrated into a policy and service management framework for traffic slicing, Open RAN can be an efficient tool for adding fine-grained traffic optimization rules, allowing a fairer apportioning of resources for all users, while preserving overall quality of experience.

 

Monday, June 8, 2015

Data traffic optimization feature set

Data traffic optimization in wireless networks has reached a mature stage as a technology. The innovations that marked the years 2008-2012 are now slowing down, and most core vendors exhibit a fairly homogeneous feature set.

The difference comes in the implementation of these features and can yield vastly different results, depending on whether vendors are using open source or purpose-built caching or transcoding engines and whether congestion detection is based on observed or deduced parameters.

Vendors nowadays tend to differentiate on QoE measurement / management and on monetization strategies, including content injection, recommendation and advertising.

Here is a list of commonly implemented optimization techniques in wireless networks.
  • TCP optimization
    • Buffer bloat management
    • Round trip time management
  • Web optimization
    • GZIP
    • JPEG / PNG… transcoding
    • Server-side JavaScript
    • White space / comments… removal
  • Lossless optimization
    • Throttling / pacing
    • Caching
    • Adaptive bit rate manipulation
    • Manifest mediation
    • Rate capping
  • Lossy optimization
    • Frame rate reduction
    • Transcoding
      • Online
      • Offline
      • Transrating
    • Contextual optimization
      • Dynamic bit rate adaptation
      • Device targeted optimization
      • Content targeted optimization
      • Rule base optimization
      • Policy driven optimization
      • Surgical optimization / Congestion avoidance
  • Congestion detection
    • TCP parameters based
    • RAN explicit indication
    • Probe based
    • Heuristics combination based
  • Encrypted traffic management
    • Encrypted traffic analytics
    • Throttling / pacing
    • Transparent proxy
    • Explicit proxy
  • QoE measurement
    • Web
      • page size
      • page load time (total)
      • page load time (first rendering)
    • Video
      • Temporal measurements
        • Time to start
        • Duration loading
        • Duration and number of buffering interruptions
        • Changes in adaptive bit rates
        • Quantization
        • Delivery MOS
      • Spatial measurements
        • Packet loss
        • Blockiness
        • Blurriness
        • PSNR / SSIM
        • Presentation MOS
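As an illustration of one technique from the list, manifest mediation, here is a minimal sketch that rewrites an HLS master playlist so renditions above a policy cap are never offered to the client. The playlist lines follow the HLS format, but the parsing is deliberately simplified (it assumes BANDWIDTH is the first attribute) and the cap is an invented example:

```python
# Simplified sketch of "manifest mediation": filter an HLS master playlist
# so the client never sees renditions above a policy-defined bandwidth cap.
# Real mediation handles full attribute lists; this only shows the idea.

def mediate_manifest(master_playlist: str, max_bandwidth: int) -> str:
    out, skip_next = [], False
    for line in master_playlist.splitlines():
        if line.startswith("#EXT-X-STREAM-INF"):
            bw = int(line.split("BANDWIDTH=")[1].split(",")[0])
            if bw > max_bandwidth:
                skip_next = True      # drop this rendition's tag line...
                continue
        elif skip_next:
            skip_next = False         # ...and the URI line that follows it
            continue
        out.append(line)
    return "\n".join(out)
```

Because the high tiers are simply absent from the mediated manifest, the client's own ABR logic settles on a lower rendition without any packet-level throttling, which is why this counts as a lossless technique.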


An explanation of each technology and its feature set can be obtained as part of the mobile video monetization report series or individually as a feature report or in a workshop.

Tuesday, March 10, 2015

Mobile video 2015 executive summary

As is now traditional, I return from Mobile World Congress with a head full of ideas and views on market evolution, fueled by dozens of meetings and impromptu discussions. The 2015 mobile video monetization report, now in its fourth year, reflects the trends and my analysis of the mobile video market, its growth, opportunities and challenges.

Here is the executive summary from the report to be released this month.

2014 has been a contrasted year for deployments of video monetization platforms in mobile networks. The market has grown in deployments and value, but an unease has gripped some of its protagonists, forcing exits and pivot strategies, while players with new value propositions have emerged. Several factors explain this transition year.

On the growth front, we have seen the emergence of MVNOs and interconnect / clearing houses as buying targets, together with the natural turnover and replacement of now-aging and fully amortized platforms deployed 5-6 years ago.

Additionally, the market leaders' upgrade strategies have naturally also created some space for challengers and new entrants. Mature markets have seen mostly replacements and MVNO greenfield deployments, while emerging markets have added new units in markets that are either too early for 3G or already saturated in 4G. Volume growth has been particularly sustained in Eastern / Central Europe, North Africa, the Middle East and South East Asia.

On the other hand, the emergence and growth of traffic encryption, coupled with the persisting legal and regulatory threats surrounding the net neutrality debate, have cooled down, delayed and in some cases shut down optimization projects as operators try to rethink their options. Western Europe and North America have seen a marked slowdown, while South America is just about starting to show interest.

The value of the deals has been in line with last year's, after sharp erosion due to the competitive environment. The leading vendors have consolidated their approach, taken on new strategies and overall capitalized on their installed base, while many new deals have gone to new entrants and market challengers.

2014 has also been the first year of a commercial public cloud deployment, which should be followed soon by others. Network function virtualization has captivated many network operators’ imagination and science experiment budget, which has prompted the emergence of the notion of traffic classification and management as a service.

Video streaming, specifically, has shown great growth in 2014, consolidating its place as the fastest growing service in mobile networks and digital content altogether. 2014 and early 2015 have seen many acquisitions of video streaming, packaging and encoding technology companies. What is new, however, is that a good portion of these acquisitions were performed not by other technology companies but by OTT players such as Facebook and Twitter.

Mobile video advertising is starting to become a “thing” again, as investments, inventory and views show triple-digit growth. The trend shows mobile video advertising possibly becoming the single largest revenue opportunity for mobile operators within a five-year timeframe, but its implementation demands a change in attitude, organization and approach that is alien to most operators' DNA. The transformation, akin to a heart transplant, will probably leave many dead on the operating table before the graft takes and the technique is refined, but they might not have much choice, looking at Google's and Facebook's announcements at Mobile World Congress 2015.

Will new technologies such as LTE Multicast, due to make their start in earnest this year and promising quality-assured HD content via streaming or download, be able to unlock the value chain?


The mobile industry is embattled and finds itself facing great threats to its business model. As the saying goes, those who survive are not necessarily the strongest, but rather those who adapt the fastest.

Tuesday, April 22, 2014

Video monetization & optimization 2014 executive summary

As announced earlier this month, my latest report "Mobile video monetization and optimization 2014" is out.

In 2014, mobile video is a fact of life. It has taken nearly 5 years for the service to transition from novelty to a growing habit that is quickly becoming an everyday occurrence in mature markets. Nearly a quarter of YouTube and Netflix views nowadays are on a tablet or a smartphone. Of course, users predominantly still stream over wifi, but as LTE slowly progresses across markets, users start to take for granted the network capacity to deliver video.

Already, LTE networks start to show signs of weariness as video threatens the infrastructure and the business model of mobile content delivery.

On the regulatory front, with the US appeals court ruling in January that the FCC had no authority to impose "Open Internet Order" (net neutrality) rules on broadband carriers, there is a wind of hope and fear blowing across the traffic management market.

Almost concurrently, we are seeing initiatives from network operators and OTT alike to find new footings for business models and cooperation / competition.
  • AT&T is experimenting with sponsored data plans,
  • Verizon has bought a CDN,
  • Deutsche Telekom partners with Evernote and Spotify,
  • Orange persists in investigating Telco OTT with Libon,
  • Uninor India wants to charge for Facebook,
  • Netflix is trialing tiered pricing,
  • Facebook and Google are hinting at operating wireless networks…

In the meantime, mobile advertising still hasn't delivered on the promise of taking advantage of a hyper-targeted, location-aware, contextually relevant service. Privacy concerns are at their highest, with the fires started by Wikileaks and Edward Snowden's NSA scandals, fanned by “free internet” activists and a misinformed public.

Quality of Experience is a growing trend, from measurement to management, and experience assurance is starting to make its appearance, buoyed by a series of vague announcements and launches in the analytics, big data, and network virtualization fields.

Legacy (already?!) video optimization vendors see the emergence of smarter, more cost-effective and policy-driven platforms. The technology has not delivered fully on cost reduction, but is being implemented for media inspection, analytics, media policy enforcement and control and lately video centric pricing models and bundles.

With the acquisition of the market leader last year and the merger of the number 2 and number 3 in market share at the beginning of this year, we have seen decisions on video optimization trials and RFx being delayed.

Video optimization in 2014 is a mature market segment. The technology has been deployed in over 200 networks globally.


{Core Analysis} believes that video optimization will continue to be deployed in most networks as a media policy enforcement point and for media analytics.

Monday, January 20, 2014

All packets are not created equal: why DPI and policy vendors look at video encoding

As we are still contemplating the impact of last week's US ruling on net neutrality, I thought I would attempt today to settle a question I often get in my workshops. Why is DPI insufficient when it comes to video policy enforcement?

Deep packet inspection platforms have evolved from static rules-based filtering engines to sophisticated enforcement points allowing packet and protocol classification, prioritization and shaping. Ubiquitous in enterprises and telco networks, they are the jack-of-all-trades of traffic management, allowing such a diverse set of use cases as policy enforcement, adult content filtering, lawful interception, QoS management, peer-to-peer throttling or interdiction, etc...
DPIs rely first on a robust classification engine. It snoops through data traffic and classifies each packet based on port, protocol, interface, origin, destination, etc... The more sophisticated engines go beyond layer 3 and are able to recognize classes of traffic using headers. This classification engine is sufficient for most traffic type inspection, from web browsing to email, from VoIP to video conferencing or peer-to-peer sharing.
The premise here is that if you can recognize, classify and tag traffic accurately, then you can apply rules governing the delivery of this traffic, ranging from interdiction to authorization, with many variants of shaping in between.
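That premise can be reduced to a toy sketch (the rule table, class names and actions below are invented for illustration; real DPI engines go far beyond port/protocol lookups): classify, then apply a rule.

```python
# Toy illustration of classify-then-enforce, the premise behind DPI.
# The (protocol, port) rule table and actions are invented examples.

RULES = {
    ("tcp", 80): "web", ("tcp", 443): "web",
    ("udp", 5060): "voip", ("tcp", 554): "rtsp-video",
    ("tcp", 25): "email",
}

def classify(protocol, port):
    """Layer-3/4 classification: map a flow tuple to a traffic class."""
    return RULES.get((protocol, port), "unknown")

def policy(traffic_class):
    """Enforcement: decide a shaping action per traffic class."""
    return {"rtsp-video": "shape", "voip": "prioritize"}.get(
        traffic_class, "allow")
```

Note how cleanly this works for RTSP-era video on its dedicated port, which is exactly why, as the next paragraphs explain, HTTP-based streaming breaks the model: video and web browsing collapse into the same (tcp, 80/443) bucket.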

DPI falls short in many cases when it comes to video streaming. Until 2008 or so, most video streaming relied on specialized protocols such as RTSP. Classification was easy, as the videos were all encapsulated in a specific protocol, allowing instantiation and enforcement of rules in a pretty straightforward manner. The emergence and predominance of HTTP-based streaming video (progressive download, adaptive streaming and variants) has complicated the task for DPIs. The transport protocol remains the same as general web traffic, but the behaviour is quite different. As we have seen many times in this blog, video traffic must be measured in a different manner from generic data traffic if policy enforcement is to be implemented. All packets are not created equal.


  • The first challenge is to recognize that a packet is video. DPIs generally infer the nature of an HTTP packet from its origin/destination. For instance, they can see that the traffic's origin is YouTube and can therefore assume that it is video. This is insufficient: not all YouTube traffic is video streaming (when you browse between pages, when you read or post comments, when you upload a video, when you like or dislike...). Applying video rules to browsing traffic or vice versa can have adverse consequences on the user experience.
  • The second challenge is policy enforcement. The main tool in the DPI arsenal for traffic shaping is setting the delivery bit rate for a specific class of traffic. As we have seen, videos come in many definitions (4K, HD, SD, QCIF...), many containers and many formats, resulting in a variety of encoding bit rates. If you want to shape your video traffic, it is crucial that you know all these elements and the encoding bit rate, because if traffic is throttled below the encoding rate, the video stalls and buffers or times out. It is not reasonable to have a one-size-fits-all policy for video (unless it is to forbid usage). In order to extract the video-specific attributes of a session, you need to decode it, which requires in-line transcoding capabilities, even if you do not intend to modify the video.


Herein lies the difficulty. To implement intelligent, sophisticated traffic management rules today, you need to be able to handle video. To handle video, you need to recognize it (not infer or assume), and measure it. To recognize and measure it, you need to decode it. This is one of the reasons why Allot bought Ortiva Wireless in 2012, Procera partnered with Skyfire, and ByteMobile upgraded their video inspection to full-fledged DPI more recently. We will see more generic traffic management vendors (PCRF, PCEF, DPI...) partner with and acquire video transcoding companies.

Tuesday, November 5, 2013

Introducing the Mobile Video Alliance

It was a great and unique chance to be invited to the inaugural meeting of the Mobile Video Alliance in London this week. I would like to thank and congratulate Matt Stagg from EE and Rory Murphy from Equinix, who did a great job of bringing together an amazing panel of participants from Akamai, Amazon, BBC, EE, BT, Lovefilm, Netflix, O2, Qualcomm, Sky, Three UK, Vodafone Global and others.

It was an even greater honor to be able to present my views on the future of mobile video and what the ecosystem should focus on to improve the consumer's user experience.

You can find my presentation and the accompanying video below.






In short, it was my first experience of executives from the whole value chain getting together to discuss the strategy, business and technology improvements necessary to enhance the consumer's video quality of experience.
Subjects of discussion ranged widely from adaptive bit rate best practice, to transcoding, caching, roaming and data caps, measuring QoE, mobile advertising... in a refreshing neutral, non-competitive environment without vendors trying to push a specific agenda.

The Mobile Video Alliance is a unique forum for the industry to come together and solve issues that are plaguing its capacity to grow profitably. Stay tuned, I will follow and report on its progress.

Monday, March 25, 2013

Video optimization 2013: Executive summary





Video accounts for over 50% of overall data traffic in mobile networks in 2013, and its compounded annual growth rate is projected at 75% over the next 5 years. Over 85% of the video traffic is generated by OTT properties, and mobile network operators are struggling to accommodate the demand in a profitable fashion. New business models are starting to emerge, together with KPIs and technologies such as video optimization to manage, control and monetize OTT video traffic. This is the backdrop for this report.



In September of 2012, Jens Schulte-Bockum, CEO of Vodafone Germany, shocked the industry by announcing that the 10% of their customer base who had elected to shift to their LTE network had a fundamentally different usage pattern than their 3G counterparts:
“Voice, text, other messaging and data - everything that makes money for us - uses less than 15%. The bit that doesn’t make money uses 85% of the capacity. Clearly we are thinking about how we can monetise that.”
“The bit that does not make money for us” is mobile OTT video.
The Bundesnetzagentur (BNetzA), Germany’s telecom regulator, has mandated that the roll-out of LTE happen first in rural areas before covering urban centres, thus ensuring quasi-100% geographical coverage at launch. While many point out that the 85% of video transiting through the 4G network is a manifestation of cord cutting, it is not the exclusive use and remains a valid LTE use case.
2012 was the first year video was responsible for over half of global mobile data traffic. Over 85% of that video traffic is OTT, generating little revenue for mobile network operators.
As 4G deployments roll out across the globe, many network operators had envisioned that this additional capacity would be sufficient to bridge the video traffic growth, allowing enough headroom for the creation and roll-out of new services. The exponential growth of video usage, encouraged by the increasing penetration of large-screen devices, the introduction of higher-definition content and the growth of adaptive streaming technology, is not likely to abate. It looks like by the time LTE has reached mass-market penetration, many networks will find themselves still congested, with an unbalanced cost / revenue structure due to the predominance of OTT video.
In reaction to this threat, many mobile network operators transitioned from generous unlimited data plans to more granular charging methods, oftentimes implementing throttling and caps to reduce unprofitable traffic growth. These methods were implemented with various results but little success in monetizing OTT video traffic without alienating the consumer.
New technologies have made their debut, such as small cells, heterogeneous network management, traffic offload, edge caching, edge packaging, traffic shaping, cloud-based virtualized network functions… and new business models are starting to emerge, reinventing relationships between network operators, content providers, and device manufacturers.
Video optimization in 2013 is a mature market segment. Deployed in over 150 networks globally, it generated over $260m in 2012 and is projected to generate close to $390m in 2013. Video optimization was, in its first instance, sold as a means to reduce video volume, thus potentially deferring investment costs for network build-out. This was a wrong assumption, as most deployments in congested networks saw no reduction in volume and little deferment of investment. In most cases, the technology allowed more users to occupy the network in congested areas. A new generation of products and vendors is starting to emerge to manage the video experience in a more nimble, granular fashion.
{Core Analysis} believes that video optimization will continue to be deployed in most networks as a means to control and manage the video traffic.

Monday, September 10, 2012

IBC roundup: product launches and announcements



As IBC 2012 is about to finish, here is a select list of announcements from vendors in the video encoding space, for those who have not been able to attend or follow all the news.

As you can see, there has been a strong launch platform by all the main players, releasing new products, solutions and enhancements. The trend this year was about making multiscreen an economic reality (with lots of features around cost savings, manageability, scalability...), the new HEVC codec and 4K TV, as well as subjects I have recently brought forward, such as edge-based packaging and advertising, making interesting inroads.


ATEME
ATEME launches new contribution encoder at IBC

Cisco
Cisco's ‘Videoscape Distribution Suite' Revolutionizes Video Content Delivery to Multiple Screens - The Network: Cisco's Technology News Site

Concurrent
Concurrent Showcases Multiscreen Media Intelligence platform at IBC 2012

Elemental Technologies

Elemental Demonstrates Next-Generation Media Processing at IBC Based on Amazon Web Services

Envivio
Envivio Introduces New On-Demand Transcoder That Significantly Enhances Efficiency and Video Quality
Envivio Enables TV Anytime on Any Screen with Enhancements to Halo Network Media Processor


Harmonic
Harmonic and Nagra Team to Power the World’s First Commercial MPEG-DASH OTT Multiscreen Service


RGBNetworks
RGB Networks Offers Complete Solution for Delivery of On-Demand TV Everywhere Services
RGB Networks Expands Multiscreen Delivery and Monetization Solution


SeaWell Networks
SeaWell Networks Announces First MPEG DASH-based Live and On Demand Video Ad-Insertion Solution at IBC 2012


Telestream


Wednesday, February 22, 2012

Flash in the cloud

Flash Networks announced today that it is making its Harmony Mobile Internet Services Gateway optimization and monetization solution available in the cloud. The solution that was traditionally deployed in mobile core networks will soon be deployed in private and public clouds.

"Harmony Mobile Internet Services Gateway integrates web and video optimization, analytics, traffic management, web monetization, content control, cell-based congestion awareness, centralized caching, service orchestration, and an intelligent policy engine in a single gateway. "

I spoke today with Merav Bahat, VP Marketing and Business Development at Flash Networks, and she adds: "We wanted to introduce the capability for our customers to use cloud services and cloud computing with our platform. Harmony will continue to be deployed in the core networks and, in conjunction, can be deployed in private and public clouds. We have been able to duplicate several functions from our platform, such as caching, storage and CPU-intensive transcoding, and put them in the cloud to offer great additional savings, higher hit rates and enhanced quality of experience."

As seen here and here, Flash Networks is the third company in the video optimization space to have announced plans to offer a cloud-based solution. Caching, transcoding and content recommendation are some of the services that Flash Networks will perform in the cloud, to benefit carriers with multi-site or multi-network footprints.

Cloud-based video optimization is gaining traction, as more and more mobile network operators see the necessity to deploy video optimization (over 80 have selected vendors to date) but balk at the CAPEX and footprint necessary to enable a good quality of experience.

Cloud deployments and cloud computing were, until recently, seen as an improbable technology to deploy real time video encoding services, but a few tier one operators have tested and are deploying the technology as we speak. It seems that the technology is reaching market validation stage and is getting a much larger acceptance from the carriers' community. It is a good move from Flash Networks to capitalize on this market trend and expand their offering in that space.

Tuesday, February 21, 2012

Starhub selects Mobixell

Mobixell Networks announced today that it has been selected by Singapore's Starhub. Mobixell will deploy its Seamless Access gateway to perform intelligent traffic management, advertising insertion and video optimization.


Liong Hang Chew, Assistant Vice President of Mobile Network Engineering at StarHub said, “We chose Mobixell Seamless Access to enable a new era of mobile data traffic handling, increasing efficiency and improving customer experience. At the same time, implementing Seamless Access will enable future services such as content security and other possible revenue-generating features."


The deal was won almost a year ago, in the summer of 2010.

Monday, February 20, 2012

Mobile video QOE part I: Subjective measurement


As video traffic continues to flood many wireless networks, over 80 mobile network operators have turned towards video optimization as a means to reduce the costs associated with growing their capacity for video traffic.
In many cases, the trials and deployments I have been involved in have shown carriers at a loss when it comes to comparing one vendor or technology against another. Lately, a few specialized vendors have been offering video QoE (Quality of Experience) tools to measure the quality of video transmitted over wireless networks. In some cases, the video optimization vendors themselves have also started to package some measurement capability with their tools to illustrate the quality of their encoding.
In the next few posts, and in more detail in my report "Video Optimization 2012", I examine the challenges and benefits of measuring video QoE in wireless networks, together with the most popular methods and their limitations.
Video QoE subjective measurement
Video quality is a very subjective matter. There is a whole body of science dedicated to providing an objective measure for a subjective quality. The attempt here is to rationalize the differences in quality between two videos via a mathematical measurement. These are called objective measurements and will be addressed in my next posts. Subjective measurement, on the other hand, is a more reliable means to determine a video’s quality. It is also the most expensive and the most time-consuming technique if performed properly.
For video optimization, a subjective measurement usually necessitates a focus group that is shown several versions of a video at different qualities (read: encodings). The individual opinion of each viewer is recorded in a templatized feedback form and averaged. For this method to work, all users need to see the same videos, in the same sequence, under the same conditions. It means that if the videos are to be streamed on a wireless network, it should be over a controlled environment, so that the same level of QoS is served for the same videos. You can then vary the protocol by having users compare the original video with a modified version, both played at the same time, on the same device, for instance.
The averaged opinion of each video, the Mean Opinion Score, is then used to rank the different versions. In the case of video optimization, we can imagine an original video encoded at 2 Mbps, then four versions provided by each vendor at 1 Mbps, 750 kbps, 500 kbps and 250 kbps. Each of the subjects in the focus group will rank each version from each vendor from 1 to 5, for instance.
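The MOS computation itself is simple to sketch; the panel scores and (vendor, bitrate) versions below are invented purely for illustration:

```python
# Sketch of the Mean Opinion Score computation described above: each
# viewer rates each (vendor, bitrate) version from 1 to 5 and the MOS
# is the arithmetic mean per version. All scores are made up.

from statistics import mean

def mos(scores):
    """scores: {(vendor, kbps): [per-viewer ratings 1..5]} -> {version: MOS}"""
    return {version: round(mean(ratings), 2)
            for version, ratings in scores.items()}

# Hypothetical four-viewer panel rating two vendors at two bitrates:
panel = {
    ("a", 1000): [4, 5, 4, 4], ("a", 250): [2, 1, 2, 2],
    ("b", 1000): [4, 4, 3, 4], ("b", 250): [3, 3, 4, 3],
}
```

Ranking the resulting MOS values per bitrate is what produces the vendor comparison discussed below; the hard part is not the arithmetic but controlling the conditions under which the raw scores are collected.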
The environment must be strictly controlled for the results to be meaningful. The variables must be the same for each vendor, e.g. all performing transcoding in real time or all offline, same network conditions, for all the playback / streams and of course, same devices and same group of users.
You can easily understand that this method can be time consuming and costly, as network equipment and lab time must be reserved, network QoS must be controlled, focus group must be available for the duration, etc...
In that example, the carrier would have each corresponding version from each vendor compared in parallel for the computation of the MOS.  The result could be something like this:
The size of the sample (the number of users in the focus group) and how controlled the environment is can dramatically affect the results, and it is not rare to find aberrational results, as in the example above where vendor "a" sees its result increase from version 2 to 3.
If correctly executed, this test can track the relative quality of each vendor at different levels of optimization. In this case, you can see that vendor "a" has a high level of perceived quality at medium-high bit rates but performs poorly at lower bit rates. Vendor "b" shows little degradation as the encoding bit rate decreases, while vendors "c" and "d" show near-linear degradation inversely proportional to the encoding.
In every case, the test must be performed in a controlled environment to be valid. Results will vary, sometimes greatly, from one vendor to another, and sometimes with the same vendor at different bit rates, so an expert in video is necessary to create the testing protocol, evaluate the vendors' setups, analyse the results and interpret the scores. As you can see, this is not an easy task, and rare are the carriers who have successfully performed subjective analysis with meaningful results for vendor evaluation. This is why, by and large, vendors and carriers have started to look at automated tools to evaluate existing video quality in a given network, to compare different vendors and technologies, and to measure ongoing perceived quality degradation due to network congestion or destructive video optimization. This will be the subject of my next posts.
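As a preview of those objective metrics, PSNR (listed earlier among the spatial measurements) reduces to a simple formula, 10·log10(MAX² / MSE). This pure-Python sketch operates on flat lists of 8-bit luma samples rather than decoded frames, which is an illustrative simplification of what real tools do:

```python
# Sketch of PSNR, an objective spatial quality metric, on flat lists of
# 8-bit samples. Real tools compute this per decoded frame and average.

import math

def psnr(reference, degraded, max_value=255):
    """Peak signal-to-noise ratio in dB between two equal-length sample lists."""
    assert len(reference) == len(degraded)
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")       # identical content: no distortion
    return 10 * math.log10(max_value ** 2 / mse)
```

Unlike a subjective MOS panel, this runs unattended, which is precisely why automated tools lean on metrics of this family, even though they correlate only imperfectly with perceived quality.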

Thursday, January 26, 2012

Intel gets Real: Intel Buys $120m Codec Patents From RealNetworks



"RealNetworks, Inc. (Nasdaq: RNWK) today announced that it has signed an agreement to sell a significant number of its patents and its next generation video codecs software to Intel Corporation for a purchase price of $120 million. Under terms of the sale, RealNetworks retains certain rights to continue to use the patents in current and future products.

"Selling these patents to Intel unlocks some of the substantial and unrealized value of RealNetworks assets," said Thomas Nielsen, RealNetworks President and CEO. "It represents an extraordinary opportunity for us to generate additional capital to boost investments in new businesses and markets while still protecting our existing business.
"RealNetworks is pleased Intel has agreed to acquire our next generation video codec software and team," said Nielsen. "Intel has a strong reputation as a technology innovator, and we believe they are well positioned to build on the development work and investment we've made in this area."
"As the technology industry evolves towards an experience-centric model, users are demanding more media and graphics capabilities in their computing devices.  The acquisition of these foundational media patents, additional patents and video codec software expands Intel's diverse and extensive portfolio of intellectual property," said Renée James, Intel senior vice president and general manager of the Software and Services Group.  "We believe this agreement enhances our ability to continue to offer richer experiences and innovative solutions to end users across a wide spectrum of devices, including through Ultrabook devices, smartphones and digital media."
In addition to the sale of the patents and next-generation video codec software, RealNetworks and Intel signed a memorandum of understanding to collaborate on future support and development of the next-generation video codec software and related products.
"We look forward to working with Intel to support the development of the next-generation video codec software and to expanding our relationship into new products and markets," said Nielsen.
RealNetworks does not anticipate that the sale of the approximately 190 patents and 170 patent applications and next generation video codec software will have any material impact on its businesses. RealNetworks businesses include a wide variety of SaaS products and services provided to global carriers, RealPlayer, the Helix streaming media platform, GameHouse online and social games, SuperPass and other media products and services sold both directly to consumers and through partners."
Another strong message and movement in the video encoding space. Video intellectual property, as we have seen here, is becoming increasingly strategic.

For or against Adaptive Bit Rate? part IV: Alternatives

As we have seen here, here and here, Adaptive Bit Rate (ABR) is a great technology for streaming video content in lossy networks, but it is handicapped by many challenges that hinder its success and threaten its implementation in mobile networks.

Having spoken to many vendors in the space, here are two techniques that I have seen deployed to try and emulate ABR's benefits in mobile networks, while reducing dependence on some of the obstacles mentioned.

DBRA (Dynamic Bit Rate Adaptation)

DBRA is a technique that relies on real-time transcoding or transrating to follow network variations. It is implemented in the core network, on a video optimization engine. When the video connection is initialized, a DBRA-capable network uses TCP feedback and metrics to understand whether the connection is improving or worsening. The platform cannot detect congestion itself but deduces it from the state of the connection: jitter, packet loss ratio, TCP window, device buffer size and filling rate are all parameters fed into proprietary heuristic algorithms. These algorithms in turn instruct the encoder, frame by frame, to match the video bit rate to the available delivery bit rate.



In the above diagram, you see a theoretically perfect implementation of DBRA, where the platform follows network variations and "sticks" to the ups and downs of the transmission rate.
The difference between implementations depends largely on how aggressive or lax the algorithm is in predicting network variations. Being overly aggressive degrades the user experience, as the encoder reduces the encoding faster than the available bandwidth actually drops; a lax implementation results in an equal or worse experience, because if the platform does not reduce the encoding fast enough, the device buffer depletes, resulting in buffering or interruption of the playback.
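The heuristic itself can be sketched in a few lines. This is an illustrative toy, not any vendor's actual algorithm: the throughput estimate, the loss discount and the headroom factor are all assumptions, but they show how TCP feedback can be turned into an encoder target, and how the headroom factor embodies the aggressive-versus-lax trade-off described above.

```python
# Illustrative DBRA-style heuristic (all formulas and thresholds are
# assumptions for the sketch, not a vendor implementation).

def estimate_delivery_rate(tcp_window_bytes, rtt_s, loss_ratio):
    """Deduce a usable delivery rate from TCP feedback: the classic
    window/RTT throughput bound, discounted for observed packet loss."""
    raw_bps = (tcp_window_bytes * 8) / rtt_s
    return raw_bps * (1.0 - loss_ratio)

def target_encoder_bitrate(delivery_bps, headroom=0.85):
    """Instruct the encoder to stay below the estimated delivery rate.
    A smaller headroom is more aggressive (quality sacrificed early);
    a larger one is laxer (risk of draining the device buffer)."""
    return int(delivery_bps * headroom)

# Example: 64 KB window, 200 ms RTT, 2% packet loss.
rate = estimate_delivery_rate(tcp_window_bytes=65_535, rtt_s=0.2, loss_ratio=0.02)
print(target_encoder_bitrate(rate))  # encoder target in bits per second
```

In a real engine this loop would run continuously, re-estimating the delivery rate per frame or per group of pictures and steering the transcoder accordingly.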

Theoretically, this is a superior implementation to adaptive streaming, as it does not rely on content providers to format and maintain streams and chunks that might not be fully optimized for all network conditions (Wi-Fi, 3G, EDGE, HSPA, LTE…) and devices. It also guarantees an "optimal" user experience, always providing the best encoding the network can deliver at any point in time.
On the flip side, the technique is CAPEX-expensive, as real-time encoding is CPU-intensive.

Vendors such as Mobixell, Ortiva and others are proponents of this implementation.


Network-controlled Adaptive Streaming:

Unlike in ABR, where the device selects the appropriate bandwidth based on network availability, some vendors perform online transcoding to simulate an adaptive streaming scenario. The server feeds the client a series of streams whose quality varies throughout the connection and fakes the network feedback readout to ensure a deterministic quality and size. The correct bit rate is computed from the TCP connection status. In other words, the network operator can decide at what bit rate a streaming connection should take place, spoofing the device by feeding it a manifest that corresponds not to the available delivery bit rate but to the bit rate selected by the carrier.


This technique uses ABR as a Trojan horse. It relies on ABR for delivery and flow control, but the device loses the ability to detect network capacity, putting the carrier in control of the bandwidth it wants dedicated to the streaming operation.
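One way to picture the manifest spoofing is server-side manifest rewriting: strip from the master playlist every variant above a carrier-chosen cap, so the client never sees (and therefore never requests) the higher bit rates. The sketch below uses a simplified HLS-style playlist as an assumed example format; a real deployment would need a full parser and per-cell policy logic.

```python
# Hedged sketch of carrier-side manifest capping. The playlist text is
# a simplified HLS-like example, not a production parser.

def cap_manifest(manifest: str, max_bps: int) -> str:
    lines = manifest.strip().splitlines()
    kept = [lines[0]]                      # keep the #EXTM3U header
    i = 1
    while i < len(lines):
        if lines[i].startswith("#EXT-X-STREAM-INF"):
            bw = int(lines[i].split("BANDWIDTH=")[1].split(",")[0])
            if bw <= max_bps:              # keep variant line + its URI line
                kept += [lines[i], lines[i + 1]]
            i += 2
        else:
            kept.append(lines[i])
            i += 1
    return "\n".join(kept)

master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=300000
video_300k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=700000
video_700k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1500000
video_1500k.m3u8"""

# The client receives a manifest capped at 700 kbps: the 1.5 Mbps
# variant simply no longer exists from its point of view.
print(cap_manifest(master, max_bps=700_000))
```

The device still runs its normal ABR logic; it just operates on a menu the carrier has already pruned, which is exactly the loss of control described above.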

These alternative implementations give the carrier more control over streaming delivery on its network. Conversely, handset and content providers relinquish the ability to control their user experience. The question is whether they really had that control in the first place, as mobile networks are so congested that the resulting user experience is in most cases below expectations. In any case, I believe that more meaningful coordination and collaboration between content providers, carriers and handset manufacturers is necessary to put control of the user experience where it belongs: in the consumer's hands.

Wednesday, January 25, 2012

Skyfire welcomes Verizon with $8m series C financing

Skyfire Labs announced today that it has raised $8m in a series C financing round with Verizon Ventures, Matrix Partners, Trinity Ventures and Lightspeed Ventures. Verizon is a new strategic investor in the company, which has raised $31m to date.


Jeff Glueck, president and CEO of Skyfire commented: “Skyfire’s Rocket Optimizer product is delivering an average of 60 percent savings for operators on video bandwidth. We welcome the participation of Verizon, which is renowned for its network planning sophistication.”


"Rocket Optimizer 2.0, the latest iteration of Skyfire’s powerful carrier-grade network video and data optimization platform, was launched in October 2011. With mobile video demand expected to rise steeply over the next three years, Rocket 2.0 aims to help carriers solve capacity issues linked to the rapid rise of mobile video streaming. The solution offers real-time optimization of mobile video to enable smoother streaming, and can be applied to specific cell towers or backhaul regions as soon as congestion is detected. Rocket Optimizer 2.0 also offers the broadest support for video formats, including the world’s first instant MP4 optimization (which comprises more than 50 percent of today’s mobile video, including most HTML5 and iOS video). By leveraging cloud computing power, Skyfire’s solution is highly cost effective to scale on both 3G and 4G LTE networks".


The company is planning to use the proceeds of this round to expand international sales. After bagging two tier 1 carriers in North America, it is ready to expand to Europe and Asia and has already started to increase its sales efforts and teams in these regions.


Skyfire is the first company to promote cloud-based computing as a way to stem the video tide that threatens to engulf mobile networks. This market space is seeing a lot of strategic activity (here and here) these days. No doubt more to come as we near Mobile World Congress.

Wednesday, January 11, 2012

For or against Adaptive Bit Rate? part III: Why isn't ABR more successful?

So why isn't ABR more successful? As we have seen here and here, there are many pros for the technology. It is a simple, efficient means to reduce the load on networks, while optimizing the quality of experience and reducing costs.

Let's review the problems experienced by ABR that hinder its penetration in the market.

1. Interoperability
Ostensibly, having three giants such as Apple, Adobe and Microsoft each pushing their own version of the implementation leads to obvious issues. First, the three vendors' implementations are not interoperable; that's one of the reasons why your iPad won't play Flash videos. Not only is the encoding of the file different (fMP4 vs. multiplexed), but the protocol (MPEG2-TS vs. HTTP progressive download) and even the manifest are proprietary. This leads to a market fragmentation that forces content providers to choose their camp or implement all technologies, which drives up the cost of maintenance and operation proportionally. MPEG DASH, a new initiative aimed at rationalizing ABR use across the different platforms, was approved just last month. The idea is that all HTTP-based ABR technologies will converge towards a single format, protocol and manifest.

2. Economics
Apple, Adobe and Microsoft seek to control content owners and production by enforcing their own formats and encodings. I don't see them converging for the sake of coopetition in the short term. A good example is Google's foray into WebM and its ambitions for YouTube.

3. Content owners' knowledge of mobile networks
Adaptive bit rate puts the onus on content owners to decide which flavour of the technology they want to implement, together with the range of quality levels they want to enable. In last week's example, we saw how one file can translate into 18 versions and thousands of fragments to manage. Obviously, not every content provider is going to go the costly route of transcoding and managing 18 versions of the same content, particularly if that content is user-generated or free to air. This leaves the content provider with the difficult decision of how many versions of the content, and how many quality levels, to support.
As we have seen over the last year, the market changes at a very rapid pace in terms of which vendors dominate smartphones and tablets. It is a headache for a content provider to foresee which devices will access its content. This is compounded by the fact that most content providers have no idea what the effective delivery bit rates can be for EDGE, UMTS, HSPA, HSPA+ or LTE. In this situation, the available encoding rates can be inappropriate for the delivery capacity.


In the example above, although the content is delivered through ABR, playback is impacted as the delivery bit rate crosses the threshold of the lowest available encoding bit rate. This results in a bad user experience, ranging from buffering to interruption of the video playback.

4. Tablet and smartphone manufacturers' knowledge of mobile networks
Obviously, delegating the selection of content quality to the device is a smart move. Since the content is played on the device, this is where there is the clearest understanding of instantaneous network capacity or congestion. Unfortunately, certain handset vendors, particularly those coming from the consumer electronics world, do not have enough experience in wireless IP for efficient video delivery. Some devices, for instance, will grab the highest capacity available on the network, irrespective of the encoding of the video requested. So, if the capacity at connection is 1 Mbps and the video is encoded at 500 kbps, it will be downloaded at twice its rate. That is not a problem when the network has capacity to spare, but as congestion creeps in, this behaviour snowballs and compounds congestion in embattled networks.

As we can see, there are still many obstacles to overcome before ABR becomes a successful mass-market implementation. My next post will show what alternatives exist to ABR in mobile networks for efficient video delivery.

Friday, January 6, 2012

For or against Adaptive Bit Rate? part II: For ABR

As we have seen here, ABR presents some significant improvements on the way video can be delivered in lossy network conditions.
If we take the fragmented MP4 implementation, we can see that the benefits to a network and content provider are significant. The manifest, transmitted at the establishment of the connection between the player and the server, describes the video file, its audio counterpart, its encoding, and the different streams and bit rates available.

Since the player has access to all of this at the establishment of the connection, it has all the data necessary for an informed decision on the best bit rate to select for the delivery of the video. This is important because ABR is the only technology today that gives the device control over the selection of the version (and therefore the quality and cost) of the video to be delivered.
This is crucial, since there is no efficient means today to convey congestion notification from the Radio Access Network through the Core and Backhaul to the content provider.

Video optimization technology sits in the core network and relies on its reading of the state of the TCP connection (packet loss ratio, jitter, delay...) to deduce the health of the connection and the congestion of the cell. The problem is that a degradation of the TCP connection can have many causes beyond payload congestion. The video optimization server can end up taking decisions to degrade or increase video quality based on insufficient observations or assumptions, and might end up contributing to congestion rather than assuaging it.

ABR, by providing the device with the capability to decide on the bit rate to be delivered, relies on the device's reading of the connection state rather than on an appliance in the core network. Since the video is played on the device, this is where the measurement of the connection state is most accurate.

As illustrated below, as network conditions fluctuate throughout a connection, the device selects the bit rate that is most appropriate for the stream, jumping between 300, 500 and 700 kbps in this example, to follow network conditions.

This provides an efficient means of giving the user optimal quality as network conditions fluctuate, while reducing pressure on congested cells when the connection degrades.
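The device-side selection logic can be sketched very simply. The bit-rate ladder below matches the 300/500/700 kbps example; the headroom factor and function names are assumptions for illustration, not any player's actual algorithm:

```python
# Illustrative sketch of device-side ABR selection. The ladder matches
# the example above; headroom and names are assumptions for the sketch.

AVAILABLE_BITRATES = [300_000, 500_000, 700_000]  # from the manifest, in bps

def select_bitrate(measured_bps, ladder=AVAILABLE_BITRATES, headroom=0.9):
    """Pick the highest variant that fits under the measured throughput
    (with some headroom). When even the lowest variant is above what the
    network delivers, the player has no choice but to request it anyway,
    which is the buffering scenario described in the previous post."""
    usable = measured_bps * headroom
    candidates = [b for b in ladder if b <= usable]
    return max(candidates) if candidates else min(ladder)

# Throughput samples over a session, e.g. measured per segment download:
for bps in (800_000, 550_000, 350_000, 250_000, 900_000):
    print(bps, "->", select_bitrate(bps))
```

Note the last-resort branch: once delivery capacity falls below the lowest encoded bit rate, no amount of adaptation helps, which is precisely the limit of the content provider's encoding choices discussed earlier.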

So, with only 4 to 6% of the traffic, why isn't ABR more widely used, and why are network operators implementing video optimization solutions in the core network? Will ABR become the standard for delivering video in lossy networks? These questions and more will be answered in the next post.