Thursday, January 26, 2012

Intel gets Real: Intel Buys $120m Codec Patents From RealNetworks

"RealNetworks, Inc. (Nasdaq: RNWK) today announced that it has signed an agreement to sell a significant number of its patents and its next generation video codecs software to Intel Corporation for a purchase price of $120 millions. Under terms of the sale, RealNetworks retains certain rights to continue to use the patents in current and future products.

"Selling these patents to Intel unlocks some of the substantial and unrealized value of RealNetworks assets," said Thomas Nielsen, RealNetworks President and CEO. "It represents an extraordinary opportunity for us to generate additional capital to boost investments in new businesses and markets while still protecting our existing business.
"RealNetworks is pleased Intel has agreed to acquire our next generation video codec software and team," said Nielsen. "Intel has a strong reputation as a technology innovator, and we believe they are well positioned to build on the development work and investment we've made in this area."
"As the technology industry evolves towards an experience-centric model, users are demanding more media and graphics capabilities in their computing devices.  The acquisition of these foundational media patents, additional patents and video codec software expands Intel's diverse and extensive portfolio of intellectual property," said RenĂ©e James, Intel senior vice president and general manager of the Software and Services Group.  "We believe this agreement enhances our ability to continue to offer richer experiences and innovative solutions to end users across a wide spectrum of devices, including through Ultrabook devices, smartphones and digital media."
In addition to the sale of the patents and next-generation video codec software, RealNetworks and Intel signed a memorandum of understanding to collaborate on future support and development of the next-generation video codec software and related products.
"We look forward to working with Intel to support the development of the next-generation video codec software and to expanding our relationship into new products and markets," said Nielsen.
RealNetworks does not anticipate that the sale of the approximately 190 patents and 170 patent applications and next generation video codec software will have any material impact on its businesses. RealNetworks businesses include a wide variety of SaaS products and services provided to global carriers, RealPlayer, the Helix streaming media platform, GameHouse online and social games, SuperPass and other media products and services sold both directly to consumers and through partners."
Another strong signal of movement in the video encoding space: video intellectual property, as we have seen here, is becoming increasingly strategic.

For or against Adaptive Bit Rate? part IV: Alternatives

As we have seen here, here and here, Adaptive Bit Rate (ABR) is a great technology for streaming video content over lossy networks, but it is handicapped by many challenges that are hindering its success and threatening its implementation in mobile networks.

Having spoken to many vendors in the space, here are two techniques that I have seen deployed to try to emulate ABR's benefits in mobile networks, while sidestepping some of the obstacles mentioned.

DBRA (Dynamic Bit Rate Adaptation)

DBRA is a technique that relies on real-time transcoding or transrating to follow network variations. It is implemented in the core network, on a video optimization engine. When the video connection is initialized, a DBRA-capable network uses TCP feedback and metrics to understand whether the connection is improving or worsening. The platform cannot detect congestion itself but deduces it from the state of the connection: jitter, packet loss ratio, TCP window, device buffer size and fill rate are all parameters fed into proprietary heuristic algorithms. These algorithms in turn instruct the encoder, frame by frame and bit by bit, to fit the video bit rate to the available delivery bit rate.

In the above diagram, you see a theoretically perfect implementation of DBRA, where the platform follows network variations and "sticks" to the ups and downs of the transmission rate.
The difference between implementations depends largely on how aggressively or conservatively the algorithm predicts network variations. An overly aggressive algorithm degrades the user experience by reducing the encoding rate faster than the available bandwidth actually drops, while a lax implementation yields an equal or worse experience: if the platform does not reduce the encoding rate fast enough, the device buffer empties, resulting in buffering or interruption of the playback.
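In practice, these heuristics trade off step-down aggressiveness against buffer safety. Here is a minimal sketch of the idea; the metric names, thresholds and step sizes are illustrative assumptions, not any vendor's actual algorithm:

```python
def target_bitrate(current_kbps, loss_ratio, jitter_ms, buffer_fill):
    """Pick the next encoding bit rate from TCP-derived metrics.

    All thresholds here are illustrative; real platforms tune these
    heuristics per network, per device and per observation window.
    """
    # Worsening connection: back off before the client buffer drains.
    if loss_ratio > 0.02 or jitter_ms > 80 or buffer_fill < 0.25:
        return max(100, int(current_kbps * 0.8))   # step down 20%, floor at 100 kbps
    # Healthy connection with a comfortable buffer: probe upward gently.
    if loss_ratio < 0.005 and buffer_fill > 0.75:
        return min(1500, int(current_kbps * 1.1))  # step up 10%, cap at 1.5 Mbps
    # Otherwise hold the current rate.
    return current_kbps

# A 3% loss spike with a draining buffer forces a step-down from 700 kbps.
print(target_bitrate(700, 0.03, 40, 0.2))  # 560
```

An aggressive tuning would use larger step-downs or lower thresholds; a lax one would hold the rate longer, at the risk of emptying the buffer.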

Theoretically, this is a superior implementation to adaptive streaming, as it does not rely on content providers to format and maintain streams and chunks that might not be fully optimized for all network conditions (Wi-Fi, 3G, EDGE, HSPA, LTE…) and devices. It also guarantees an "optimal" user experience, always providing the best encoding the network can deliver at any point in time.
On the flip side, the technique is CAPEX-expensive, as real-time encoding is CPU-intensive.

Vendors such as Mobixell, Ortiva and others are proponents of this implementation.

Network-controlled Adaptive Streaming:

Unlike in ABR, where the device selects the appropriate bandwidth based on network availability, some vendors perform online transcoding to simulate an adaptive streaming scenario. The server feeds the client a series of streams whose quality varies throughout the connection, and fakes the network-feedback readout to ensure a deterministic quality and size; the target bit rate is computed from the TCP connection status. In other words, the network operator decides at what bit rate a streaming connection should take place, spoofing the device by feeding it a manifest that corresponds not to the available delivery bit rate but to the bit rate selected by the carrier.

This technique uses ABR as a Trojan horse. It relies on ABR for delivery and flow control, but the device loses the capacity to measure network conditions, putting the carrier in control of the bandwidth it wants to dedicate to the streaming operation.
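The manifest-rewriting idea can be sketched with a toy example. This uses an HLS-style master playlist for concreteness; the tags follow the m3u8 format, but the variant names and the filtering logic are illustrative, not any vendor's implementation:

```python
def pin_bitrate(manifest_lines, max_bw):
    """Strip every variant above the carrier-selected bandwidth, so the
    client only 'sees' the rates the network is willing to deliver."""
    out, skip = [], False
    for line in manifest_lines:
        if line.startswith("#EXT-X-STREAM-INF"):
            # Parse the advertised bandwidth of this variant.
            bw = int(line.split("BANDWIDTH=")[1].split(",")[0])
            skip = bw > max_bw
            if not skip:
                out.append(line)
        elif skip:
            skip = False  # drop the variant URI that follows a skipped entry
        else:
            out.append(line)
    return out

master = [
    "#EXTM3U",
    "#EXT-X-STREAM-INF:BANDWIDTH=300000", "low.m3u8",
    "#EXT-X-STREAM-INF:BANDWIDTH=700000", "mid.m3u8",
    "#EXT-X-STREAM-INF:BANDWIDTH=1500000", "high.m3u8",
]
# Carrier caps this connection at 700 kbps: the 1.5 Mbps variant disappears.
print(pin_bitrate(master, 700000))
```

The device still believes it is running standard ABR; it simply never learns that higher bit rates exist.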

These alternative implementations give the carrier more control over streaming delivery on its network. Conversely, handsets and content providers relinquish the capacity to control their user experience. The question is whether they really had control in the first place, as mobile networks are so congested that the resulting user experience is in most cases below expectations. In any case, I believe that more meaningful coordination and collaboration between content providers, carriers and handset manufacturers is necessary to put control of the user experience where it belongs: in the consumer's hands.

Wednesday, January 25, 2012

Skyfire welcomes Verizon with $8m series C financing

Skyfire Labs announced today that it has raised $8m in a series C financing round with Verizon Ventures, Matrix Partners, Trinity Ventures and Lightspeed Ventures. Verizon is a new strategic investor in the company, which has raised $31m to date.

Jeff Glueck, president and CEO of Skyfire commented: “Skyfire’s Rocket Optimizer product is delivering an average of 60 percent savings for operators on video bandwidth. We welcome the participation of Verizon, which is renowned for its network planning sophistication.”

"Rocket Optimizer 2.0, the latest iteration of Skyfire’s powerful carrier-grade network video and data optimization platform, was launched in October 2011. With mobile video demand expected to rise steeply over the next three years, Rocket 2.0 aims to help carriers solve capacity issues linked to the rapid rise of mobile video streaming. The solution offers real-time optimization of mobile video to enable smoother streaming, and can be applied to specific cell towers or backhaul regions as soon as congestion is detected. Rocket Optimizer 2.0 also offers the broadest support for video formats, including the world’s first instant MP4 optimization (which comprises more than 50 percent of today’s mobile video, including most HTML5 and iOS video). By leveraging cloud computing power, Skyfire’s solution is highly cost effective to scale on both 3G and 4G LTE networks".

The company plans to use the proceeds of this round to expand international sales. After bagging two tier 1 carriers in North America, it is ready to expand to Europe and Asia and has already started to increase its sales efforts and teams in these regions.

Skyfire is the first company to promote cloud-based computing to stem the tide of video that is threatening to engulf mobile networks. This market space is seeing a lot of strategic activity (here and here) these days. No doubt more to come as we near Mobile World Congress.

Wednesday, January 18, 2012

Bytemobile lay offs and reorganization

In an interview with FierceWireless today, Bytemobile confirmed small layoffs to reorganize its teams around video optimization.

"Bytemobile spokeswoman Stacey Infantino told FierceWireless that the company recently let go of a "very small percentage" of its staff but declined to comment on the number of employees affected by the reorganization. She said that the main reason for the changes was "to realign teams to better support the ongoing success of our video and web optimization solutions and the momentum of our adaptive traffic management platform. We are confident that this will result in better alignment with growth opportunities."

As the market readies for Mobile World Congress, we see some movement on the video optimization front. Bytemobile is realigning its cost structure, Openwave is throwing in the towel, and many partnership announcements are expected at the show next month. To learn the latest about the vendors' strategies and market share, contact me to receive the Video Optimization report 2012 or a personalized workshop.

Intel and Samsung partner for open OS in smartphones and connected devices

As you will remember, Intel decided to leave the connected-TV space to ARM back in October, after repeatedly failing to gain any significant market share. Its Atom chips failed to convince and to deliver a significantly better cost-performance ratio to prospective OEMs and ODMs.

Samsung told Informa Telecoms that it plans to merge its homegrown operating system BADA with Intel's open-source Tizen. The move will be gradual and will first affect handsets, with low-end devices staying on BADA for a while and high-end smartphones and tablets moving to Tizen as early as Q2 2012.

Smart TVs should follow shortly thereafter.

An interesting move that allows Samsung to free itself from the cumbersome Google-Android relationship and to stay clear of the current patent war between OS, app and device vendors.
At the same time, it allows Intel to take a prominent place in one of the fastest growing segments in consumer electronics, connected devices, as we have seen here.

Thursday, January 12, 2012

Openwave for sale, Sandvine's buyback, Comverse's Spin-off

Openwave Systems Inc. (Nasdaq: OPWV): 
Openwave announced today that its board of directors has decided to "pursue strategic alternatives" for the company's mediation and messaging products business.
While this is hardly a surprise if you have followed the saga over the last year (here, here, here, here and here), it is still sad to see one of the great companies that shaped the mobile internet divest its assets. The company is not completely up for sale, only the product business is, while the board and management team try to further monetize the patent portfolio through licensing deals, such as the one with Microsoft, which brought in $18m last quarter.
For a full list of potential acquirers of Openwave assets, don't hesitate to contact me through LinkedIn or my email, at the top right of this page.

Sandvine Corporation (TSX:SVC) (AIM:SAND):
On the heels of reporting its Q4 and fiscal 2011 results (Q411: $20.6 million revenue, GAAP net loss of $3.6 million, non-GAAP loss of $2.8 million; fiscal 2011: $89.3 million revenue, GAAP net loss of $5.8 million, non-GAAP loss of $2.2 million) and 44 new customers, Sandvine announced that its Board of Directors has approved the adoption of an open-market stock buyback program for the purchase of up to approximately 12 million common shares ("Shares") over a one-year period.

Comverse Technology (CMVT):
Comverse Technology is the holding structure behind Comverse and Verint. It has announced that it will distribute the shares of its wholly owned subsidiary Comverse to its shareholders on a pro rata basis. The move is an effort to create a more tax-efficient structure and unlock some of the value. Investors greeted the news with disappointment, as they were hoping for a full buyout through M&A.

Wednesday, January 11, 2012

For or against Adaptive Bit Rate? part III: Why isn't ABR more successful?

So why isn't ABR more successful? As we have seen here and here, there are many pros for the technology. It is a simple, efficient means to reduce the load on networks, while optimizing the quality of experience and reducing costs.

Let's review the problems experienced by ABR that hinder its penetration in the market.

1. Interoperability
Ostensibly, having three giants such as Apple, Adobe and Microsoft each pushing their own version of the implementation leads to obvious issues. First, the three vendors' implementations are not interoperable; that's one of the reasons why your iPad won't play Flash videos. Not only is the encoding of the file different (fMP4 vs. multiplexed), but the protocol (MPEG2-TS vs. HTTP progressive download) and even the manifest are proprietary. This fragmentation forces content providers to choose a camp or implement all the technologies, which drives up the cost of maintenance and operation proportionally. MPEG DASH, a new initiative aimed at rationalizing ABR use across the different platforms, was approved just last month. The idea is that all HTTP-based ABR technologies will converge towards a single format, protocol and manifest.

2. Economics
Apple, Adobe and Microsoft seek to control the content owner and production by enforcing their own formats and encoding. I don't see them converge for the sake of coopetition in the short term. A good example is Google's foray into WebM and its ambitions for YouTube.

3. Content owners' knowledge of mobile networks
Adaptive bit rate puts the onus on content owners to decide which flavour of the technology they want to implement, together with the range of quality levels they want to enable. In last week's example, we saw how one file can translate into 18 versions and thousands of fragments to manage. Obviously, not every content provider is going to go the costly route of transcoding and managing 18 versions of the same content, particularly if that content is user-generated or free-to-air. This leaves the content provider with the difficult choice of how many versions of the content, and how many quality levels, to support.
As we have seen over the last year, the market changes at a very rapid pace in terms of which vendors dominate smartphones and tablets. It is a headache for a content provider to foresee which devices will access its content. This is compounded by the fact that most content providers have no idea what the effective delivery bit rates are for EDGE, UMTS, HSPA, HSPA+ or LTE. In this situation, the available encoding rates can be inappropriate for the delivery capacity.

In the example above, although the content is delivered through ABR, the content playback will be impacted as the delivery bit rate crosses the threshold of the lowest available encoding bit rate. This results in a bad user experience, ranging from buffering to interruption of the video playback.

4. Tablet and smartphone manufacturers' knowledge of mobile networks
Obviously, delegating the selection of content quality to the device is a smart move. Since the content is played on the device, that is where the clearest understanding of instantaneous network capacity or congestion resides. Unfortunately, certain handset vendors, particularly those coming from the consumer electronics world, do not have enough experience in wireless IP for efficient video delivery. Some devices, for instance, will grab the highest capacity available on the network, irrespective of the encoding of the requested video. If the capacity at connection time is 1 Mbps and the video is encoded at 500 kbps, the video will be downloaded at twice its playback rate. That is not a problem while the network has capacity to spare, but as congestion creeps in, this behaviour snowballs and compounds congestion in embattled networks.
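The arithmetic of that behaviour, and of a server-side pacing counter-measure, can be sketched in a few lines. This is a back-of-envelope model, not a real delivery stack, and the 10% pacing headroom is an assumed figure:

```python
def seconds_to_fetch(duration_s, video_kbps, link_kbps, paced=False):
    """Time to transfer a clip: an unpaced client pulls at full link
    speed, while a paced delivery throttles to roughly the encoding
    rate (assumed 10% headroom above the video bit rate)."""
    size_kbit = duration_s * video_kbps
    rate = min(video_kbps * 1.1, link_kbps) if paced else link_kbps
    return size_kbit / rate

# A 60 s clip encoded at 500 kbps over a 1 Mbps link:
print(seconds_to_fetch(60, 500, 1000))                     # 30.0 s: 2x playback rate
print(round(seconds_to_fetch(60, 500, 1000, paced=True)))  # 55 s: ~playback rate
```

The unpaced fetch holds the radio channel at full rate for the whole transfer; pacing spreads the same payload over the playback duration, which is gentler on a congested cell.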

As we can see, there are still many obstacles to overcome before ABR becomes a successful mass-market implementation. My next post will show what alternatives to ABR exist in mobile networks for efficient video delivery.

Friday, January 6, 2012

For or against Adaptive Bit Rate? part II: For ABR

As we have seen here, ABR presents some significant improvements on the way video can be delivered in lossy network conditions.
If we take the fragmented MP4 implementation, we can see that the benefits to a network and to a content provider are significant. The manifest, transmitted at the establishment of the connection between the player and the server, describes the video file, its audio counterpart, their encoding, and the different streams and bit rates available.

Since the player has access to all this at the establishment of the connection, it has all the data necessary for an informed decision on the best bit rate to select for the delivery of the video. This is important because ABR is the only technology today that gives the device the control over the selection of the version (and therefore quality and cost) of the video to be delivered.
This is crucial, since there is no efficient means today to convey congestion notification from the Radio Access Network through the Core and Backhaul to the content provider.

Video optimization technology sits in the core network and relies on its reading of the state of the TCP connection (packet loss ratio, jitter, delay...) to deduce the health of the connection and the level of cell congestion. The problem is that a degradation of the TCP connection can have many causes beyond payload congestion. The video optimization server can end up deciding to degrade or increase video quality based on insufficient observations or assumptions, and might end up contributing to congestion rather than assuaging it.

ABR, by giving the device the capability to decide on the bit rate to be delivered, relies on the device's reading of the connection state rather than on an appliance in the core network. Since the video will be played on the device, this is the place where the measurement of the connection state is most accurate.

As illustrated below, as network conditions fluctuate throughout a connection, the device selects the bit rate that is most appropriate for the stream, jumping between 300, 500 and 700 kbps in this example, to follow network conditions.

This provides an efficient means to give the user optimal quality as network conditions fluctuate, while reducing pressure on congested cells when the connection degrades.
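At its core, that device-side logic amounts to picking the highest advertised rate the measured throughput can sustain. A minimal sketch, using the 300/500/700 kbps rate ladder from this example (real players also smooth throughput estimates and account for buffer levels):

```python
def pick_stream(throughput_kbps, available=(300, 500, 700)):
    """Choose the highest advertised bit rate that the measured
    throughput can sustain; fall back to the lowest stream otherwise."""
    fitting = [r for r in sorted(available) if r <= throughput_kbps]
    return fitting[-1] if fitting else min(available)

# Throughput fluctuating across the session drives the stream switches:
for t in (800, 650, 450, 250):
    print(t, "->", pick_stream(t))  # 700, 500, 300, 300
```

Note the last case: when throughput falls below the lowest available encoding, the player has nowhere left to jump, which is exactly the failure mode described in the previous post.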

So, with ABR representing only 4 to 6% of traffic, why isn't it more widely used, and why are network operators implementing video optimization solutions in the core network? Will ABR become the standard for delivering video in lossy networks? These questions and more will be answered in the next post.

Tuesday, January 3, 2012

For or against Adaptive Bit Rate? part I: what is ABR?

Adaptive Bit Rate streaming (ABR) was invented to enable content providers to offer video streaming services in environments where bandwidth fluctuates. The benefit is clear: as a connection's capacity changes over time, the video carried over that connection can vary its bit rate, and therefore its size, to adapt to network conditions. The player (or client) and the server exchange discrete information on the control plane throughout the transmission, whereby the server exposes the available bit rates for the video being streamed and the client selects the appropriate version based on its reading of the current connection conditions.

The technology is fundamental to help accommodate the growth of online video delivery over unmanaged (OTT) and wireless networks.
The implementation is as follows: a video file is encoded into different streams at different bit rates. The player can "jump" from one stream to another as the condition of the transmission degrades or improves. A manifest document is exchanged between the server and the player at the establishment of the connection, so that the player knows the list of versions and bit rates available for delivery.

Unfortunately, the main content delivery technology vendors then started to diverge from the standard implementation in order to differentiate and to better control the user experience and the content provider community. We have reviewed some of these vendor strategies here. Below are the main implementations:

  • Apple HTTP Adaptive (Live) Streaming (HLS) for iPhone and iPad: This version is implemented over HTTP and MPEG2-TS. It uses a proprietary manifest called m3u8. Apple creates different versions of the same stream (2 to 6, usually) and breaks each stream down into little "chunks" to facilitate the client jumping from one stream to another. This results in thousands of chunks for each stream, identified by timecode. Unfortunately, the content provider has to deal with the pain of managing thousands of fragments for each video stream. A costly implementation.
  • Microsoft IIS Smooth Streaming (Silverlight, Windows Phone 7): Microsoft has implemented fragmented MP4 (fMP4) to enable a stream to be separated into discrete fragments, again to allow the player to jump from one fragment to another as conditions change. Microsoft uses AAC for audio and AVC/H.264 for video compression. The implementation allows each video and audio stream, with all its fragments, to be grouped into a single file, providing a more cost-effective solution than Apple's.
  • Adobe HTTP Dynamic Streaming (HDS) for Flash: Adobe uses a proprietary format called F4F to allow delivery of Flash videos over RTMP and HTTP. The Flash Media Server creates multiple streams, at different bit rates but also at different quality levels. Streams are full length (the duration of the video).
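To make the manifest concept concrete, here is what a minimal HLS master playlist (m3u8) can look like. The tags are standard HLS, but the variant URIs and codec strings below are placeholder values, not from any real service:

```
#EXTM3U
# Each EXT-X-STREAM-INF line advertises one variant of the same video;
# the player picks a variant, then fetches its chunked media playlist.
#EXT-X-STREAM-INF:BANDWIDTH=300000,CODECS="avc1.42e00a,mp4a.40.2"
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=700000,CODECS="avc1.42e00a,mp4a.40.2"
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1500000,CODECS="avc1.4d401f,mp4a.40.2"
high/index.m3u8
```

Smooth Streaming and HDS carry equivalent information, but in their own XML-based manifest formats, which is precisely the interoperability problem discussed here.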

None of the implementations above is interoperable, from a manifest or from a file perspective, which means that a content provider with a single 1080p HD video could find itself creating one version for each player, multiplied by the number of streams needed to accommodate bandwidth variation, multiplied by the number of segments, chunks or files for each version... As illustrated above, a simple video can result in 18 versions and thousands of fragments to manage. This is the reason why only 4 to 6% of current videos are transmitted using ABR. The rest of the traffic uses good old progressive download, with no capacity to adapt to changes in bandwidth, which in turn explains why wireless network operators (over 60 of them) have elected to implement video optimization systems in their networks. In my next posts, we will look at the pros and cons of ABR, and at the complementary and competing technologies that achieve the same goals.
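The combinatorics behind those numbers can be checked with quick arithmetic. The 3 platforms and 6 bit rates come from the discussion above; the 2-hour duration and 10-second chunk length are assumed figures for illustration:

```python
# Back-of-envelope for the "18 versions, thousands of fragments" claim.
platforms, bitrates = 3, 6              # Apple, Microsoft, Adobe x 6 rates
versions = platforms * bitrates         # 18 encoded versions of one video

duration_s, chunk_s = 2 * 60 * 60, 10   # a 2-hour movie, 10 s chunks (assumed)
chunks_per_version = duration_s // chunk_s

print(versions, versions * chunks_per_version)  # 18 versions, 12960 fragments
```

Even with coarser chunking, one title quickly fans out into thousands of objects to encode, store and serve.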

Find part II of this post here.