
Monday, October 27, 2014

HTTP 2.0, SPDY, encryption and wireless networks

I had mused, three and a half years ago, at the start of this blog, that content providers might decide to encrypt and tunnel traffic in the future in order to retain control of the user experience.

It is amazing that wireless browsing is increasingly becoming the medium of choice for accessing the internet, yet the technology it relies on is still designed for fixed, high-capacity, lossless, low-latency networks. One would think a technology should be designed for its primary (and most challenging) use case and then adapted to more generous conditions, not the other way around... but I am ranting again.

We are now definitely seeing this prediction accelerate since Google introduced SPDY and proposed it as the default for HTTP 2.0.
While the latest HTTP 2.0 draft is due to be completed this month, many players in the industry are quietly but decidedly committing resources to the battle.

SPDY, in its current version, does not enhance user experience in wireless networks and in many cases degrades it. Its implementation over TCP leaves it too dependent on round trip time, which in turn creates race conditions in lossy networks. SPDY can actually contribute to congestion in wireless networks rather than reduce it.
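To see why dependence on round trip time matters so much, the classic Mathis et al. approximation of steady-state TCP throughput is a useful back-of-the-envelope tool. The sketch below uses hypothetical RTT and loss figures, not measurements.

    # Rough illustration of TCP's sensitivity to round trip time and loss,
    # using the Mathis et al. approximation: throughput <= (MSS/RTT) * (C/sqrt(p)).
    # RTT and loss figures are hypothetical, for illustration only.
    from math import sqrt

    MSS_BITS = 1460 * 8     # typical maximum segment size, in bits
    C = 1.22                # constant from the Mathis approximation

    def tcp_throughput_bps(rtt_s: float, loss_rate: float) -> float:
        """Approximate upper bound on steady-state throughput of one TCP connection."""
        return (MSS_BITS / rtt_s) * (C / sqrt(loss_rate))

    for rtt_ms, loss in [(20, 0.0001), (80, 0.001), (150, 0.01)]:
        mbps = tcp_throughput_bps(rtt_ms / 1000, loss) / 1e6
        print(f"RTT {rtt_ms:3d} ms, loss {loss:.4f}: ~{mbps:5.1f} Mbps")

A fixed connection at 20 ms and negligible loss has an order of magnitude more headroom than a cellular one at 150 ms with 1% loss, which is why multiplexing everything over a single TCP connection behaves so differently in wireless.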

On one side, content providers are using net neutrality arguments to further their case for encryption. They are conflating security (NSA leaks...), privacy (Apple iCloud leaks) and net neutrality (equal, and if possible free, access to networks) concerns.

On the other side, network operators and vendors argue that net neutrality does not mean non-intervention, and that the good of users overall is subverted when some content providers and browser/client vendors use aggressive, predatory tactics to monopolize bandwidth in the name of QoE.

At this point, things are still fairly fluid. Google is proposing that most / all traffic be encrypted by default, while network operators are trying to introduce the concept of trusted proxies that can decrypt / encrypt under certain conditions and with the user's assent.

Both attempts are, in my mind, short-sighted and doomed to fail; they are the result of aggressive strategies to establish market dominance.

In a perfect world, the device, network and content provider negotiate service quality based on device capabilities, subscriber data plan, network capacity and content quality. Technologies such as adaptive bit rate could have been tremendously efficient here, but the operative word in the previous sentence is "negotiate", which assumes collaboration, discovery and access to the relevant information to take decisions.

In the current state of affairs, adaptive bit rate is oftentimes corrupted to seize as much network bandwidth as possible, which results in devices and service providers aggressively competing for bits and bytes.
Network operators, for their part, try to improve or control user experience by deploying DPI, transparent caches, pacing technology, traffic shaping engines, video transcoding, etc.

Content providers assume that the highest quality of content (HD for video, for instance) equals the maximum experience for the subscriber and therefore try to capture as much network resource as possible to deliver it. Browser, app and phone manufacturers also assume that more speed equals a better user experience and therefore try to commandeer as much capacity as possible. The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

This behaviour leads to a network where all resources are perpetually in contention and all end-points vie for priority and maximum resource allocation. From this perspective, one can understand that there is no such thing as "net neutrality", at least not in wireless networks. When network resources are over-subscribed, decisions are made as to who gets more capacity, priority, speed... The question becomes who should be in a position to make these decisions. Right now, the laissez-faire approach to net neutrality means that the network is not managed; it is subjected to traffic. When in contention, resources manage traffic based on obscure rules in load balancers, routers, base stations, traffic management engines... This approach is the result of lazy, surface thinking. Net neutrality should be the opposite of non-intervention. Its rules should be applied equally to networks, devices / apps / browsers and content providers if what we want to enable is fair and equal access to resources.

Now, who said access to wireless should be fair and equal? Unless the networks are nationalized and become government assets, I do not see why private companies, in a competitive market, couldn't manage their resources to optimize their utilization.

If we transport ourselves into a world where all traffic becomes encrypted overnight, networks lose the ability to manage traffic beyond allowing / blocking it and assigning high-level QoS metrics to specific services. That would lead to network operators being forced to charge exclusively for traffic; at that point, everyone has to pay per byte transmitted. The cost to users would become prohibitive as more and more video of higher resolution flows through the networks. It would also mean that these video providers could asphyxiate the other services... More importantly, it would mean that the user experience becomes the fruit of the fight between content providers' ability to monopolize network capacity, which would go against any "net neutrality" principle. A couple of content providers could dominate not only the services but access to those services as well.

The best rationale against this scenario is commercial. Advertising is the only common business model that supports pay TV and many web services today. The only way to have an efficient, high-CPM ad model in wireless is to make it relevant and contextual. The only way that is going to happen is if the advertising is injected as close to the user as possible. That means collaboration. Network operators cannot provide subscriber data to third parties, so they have to exploit and anonymize it themselves. This means encryption, if needed, must occur after ad insertion, which needs to occur at the network edge.

The most commercially efficient model for all parties involved is collaboration and advertising, but current battle plans show adversarial models, where obfuscation and manipulation are used to reduce opponents' margin of maneuver. Complete analysis and scenarios in my video monetization report here.

Monday, October 20, 2014

Report from SDN / NFV shows part I

Wow! Last week was a busy week for everything SDN / NFV, particularly in wireless. My in-depth analysis of the segment is captured in my report. Here are a few thoughts on the latest news.

First, as is now almost traditional, a third white paper was released by network operators on Network Functions Virtualization. Notably, the original group of 13 who co-wrote the first manifesto that spurred the creation of the ETSI ISG NFV has now grown to 30. The Industry Specification Group now counts 235 companies (including yours truly) and has seen 25 Proofs of Concept initiated. In short, the white paper announces another two-year term of effort beyond the initial timeframe. This new phase will focus on multi-vendor orchestration operability and integration with legacy OSS/BSS functions.

MANO (orchestration) remains a point of contention, and many are starting to recognise the growing threat and opportunity the function represents. Some operators (like Telefonica) seem to have actually reached the same conclusions as I have in this blog and are starting to look deeply into what implementing MANO means for the ecosystem.

Today I will go a step further. I believe that MANO in NFV has the potential to evolve the same way app stores did in wireless. It is probably an apt comparison: both are used to safeguard, reference, inventory and manage the propagation and lifecycle of software instances.

In both cases, the referencing of the apps / VNFs is a manual process, with arbitrary rules that can lead to a dominant position if not caught early. It would be relatively easy, in this nascent market, for an orchestrator to integrate as many VNFs as possible, with some "extensions" to lock in the segment, as Apple and Google did with mobile.

I know, "Open" is the new "Organic", but for me there is a clear need to create an open source MANO project; let's call it "OpenHand"?
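To make the app-store analogy concrete, here is a deliberately minimal Python sketch of the cataloguing role an orchestrator plays: onboard a package, reference it, and track instance lifecycle. All class and field names are hypothetical and not taken from the ETSI NFV data models.

    # Toy VNF catalog illustrating the "app store" role of MANO: onboarding,
    # referencing and lifecycle tracking. All names are hypothetical; real
    # orchestrators follow the ETSI NFV information models.
    from dataclasses import dataclass, field
    from uuid import uuid4

    @dataclass
    class VnfPackage:
        vendor: str
        name: str
        version: str
        image_url: str          # where the VM image or container lives

    @dataclass
    class VnfInstance:
        package: VnfPackage
        state: str = "INSTANTIATED"
        instance_id: str = field(default_factory=lambda: str(uuid4()))

    class Catalog:
        def __init__(self):
            self._packages = {}
            self._instances = {}

        def onboard(self, pkg: VnfPackage) -> str:
            """Reference a VNF package so it can later be instantiated."""
            key = f"{pkg.vendor}/{pkg.name}:{pkg.version}"
            self._packages[key] = pkg
            return key

        def instantiate(self, key: str) -> VnfInstance:
            inst = VnfInstance(self._packages[key])
            self._instances[inst.instance_id] = inst
            return inst

        def terminate(self, instance_id: str) -> None:
            self._instances[instance_id].state = "TERMINATED"

    catalog = Catalog()
    key = catalog.onboard(VnfPackage("acme", "vEPC-SGW", "1.0", "http://example.invalid/sgw.qcow2"))
    inst = catalog.instantiate(key)
    print(inst.instance_id, inst.state)

The lock-in risk described above lives in the onboarding step: whoever sets the referencing rules and "extensions" decides which VNFs get in and on what terms.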

You can view below a mash-up of the presentations I gave at the show last week and at SDN & NFV USA in Dallas the week before.



More notes on these past few weeks soon. Stay tuned.

Thursday, May 15, 2014

NFV & SDN Part II: Clouds & Openstack






I just came back from the OpenStack Summit taking place in Atlanta this week. In my quest to better understand SDN, NFV and cloud maturity for mobile networks and video delivery, it was an unavoidable step. As announced a couple of weeks ago, this is a new project for me and a new field of interest.
I will chronicle my progress (or lack thereof) in this blog and use this tool to try to explain my understanding of the state of the technology and the market.
I am not a scientist and am somewhat slow to grasp new concepts, so you will undoubtedly find much to correct here. I appreciate your gentle comments as I progress.

So... where do we start? Maybe a couple of definitions.
Clouds
What is (are) the cloud(s)? Clouds are environments where software resources can be virtualized and allocated dynamically to instantiate, grow and shut down services.
Public clouds are made available by corporations to consumers and businesses in a commercial fashion. They are usually designed to satisfy a single need (Storage, Computing, Database...). 
The most successful examples are Amazon Web Services, Google Drive, Apple iCloud and Dropbox. Pricing models are usually per-hour rental of a computing or database unit, or per-month rental of storage capacity. We will not address public clouds in this blog.
Private clouds are usually geo-dispersed capabilities federated and instantiated as one logical network capacity for a single company. We will focus here on the implementation of cloud technology in wireless networks. Typical use cases are simple data storage or development and testing sandboxes.
Cloud technology here relies on OpenStack to abstract compute, storage and networking functions into logical elements and to manage heterogeneous virtualized environments. OpenStack is the operating system of the cloud; it allows operators to instantiate Infrastructure- or Platform-as-a-Service (IaaS and PaaS respectively).
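To make "instantiate infrastructure" concrete, here is a minimal sketch using the Python openstacksdk client (a generic, later-generation client; the cloud name, image, flavor and network are placeholders, not from any specific deployment).

    # Minimal sketch: booting one compute instance through OpenStack's API
    # with the openstacksdk client. "my-cloud" and the image / flavor /
    # network names are placeholders; any real deployment will differ.
    import openstack

    conn = openstack.connect(cloud="my-cloud")   # reads credentials from clouds.yaml

    image = conn.compute.find_image("ubuntu-20.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.name, server.status)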

OpenStack
OpenStack is also an open source community, started by NASA and Rackspace and now independent and self-governed. It essentially functions as a collaborative development community aimed at defining and releasing the OpenStack software packages.
After attending presentations and briefings from Deutsche Telekom, Ericsson, Dell, Red Hat, Juniper, Verizon, Intel… I have drawn some very preliminary thoughts I would like to share here:
OpenStack is in its 9th release (Icehouse) and wireless interest is glaringly lacking. It has been set up primarily as an enterprise initiative, and while enterprise and telecom IT share many needs, wireless regulations tend to be much more stringent. CALEA (law enforcement) and Sarbanes-Oxley (accounting, traceability) are but a few of the provisions that would preclude OpenStack from running today in a commercial telco private cloud.
As presented by Verizon, Deutsche Telekom and other telcos at the summit, the current state of OpenStack does not allow it to be deployed "out of the box" without development and operations teams to patch, adapt and stabilize the system for telco purposes. These patches and tweaks have a negative impact on performance, scalability and latency, because they were not taken into account at the design phase; they are workarounds rather than fixes. Case studies were presented, ranging from CDN video caching in a wireless infrastructure to generic sandboxes for storage and software testing. The results show that the technology is not yet mature enough to enable telco-grade services.
Many companies are increasingly investing in OpenStack; still, I feel that a separate, telco-focused working group must be created in its midst if we want it to reach telco-grade applicability.
More importantly, and maybe more concerning, is my belief that commercial implementation of the technology requires a corresponding change in organizational setup and behaviour. Migrating to cloud and OpenStack is traditionally associated with the supposed benefits of faster service roll-out and reduced time to market, capex and opex, as specialized telco appliances "transcend" to the cloud and are virtualized on off-the-shelf hardware.
There is no free lunch out there. The technology is currently immature, but as it evolves, we start to see that all these abstraction layers are going to require some very specialized skills to deploy, operate and maintain. These skills are very rare right now; witness HP, Canonical, Intel and Ericsson all advertising "we are hiring" on their booths and during their presentations / keynotes. I have the feeling that operators who want to implement these technologies in the future will simply not have the internal skill set or capacity to roll them out. The large systems integrators might end up being the only winners there, ultimately reaping the cost benefits of virtualized networks while selling network-as-a-service to their customers.
Network operators might end up trading one vendor lock-in for another, much stickier one if their services run on a third-party cloud. (I don't believe we can realistically talk about service migration from cloud to cloud and vendor to vendor when two hypervisors supposedly running standard interfaces can't really coexist today in the same service.)

Wednesday, March 12, 2014

PayTV vs. OTT part VII: 6 OTT Strategies

Pay TV vs. OTT:


More developments will be presented at Monetizing OTT services - London - March 24/26

Enter the discount code OTT_CORE here for a 20% discount.

The internet is a perfect medium for content distribution. Storage, access and distribution are inexpensive, allowing the smallest content owners and producers to offer their wares with a small starting investment. For OTT vendors, this is both an opportunity and a threat. Long tail content usually finds its audience through social media; specialty content is at home on the internet, thanks to advances in search and recommendation engines. Short tail content is pushed by advertising rather than social interaction, and the budget necessary to launch new content can be staggering, as illustrated by the advertising campaigns preceding new movies and video games. Content is king in OTT, and there are a few strategies put in place by the different players in this segment to secure customers and revenue.

1.     Pay-per-view, rental, on-demand

Apple’s iTunes and Amazon on demand are perfect examples of OTT services. Without a subscription, any consumer with a credit card can rent and stream content to almost any screen in minutes. Revenues are generated from the transaction. They are collected by the OTT player, which then apportions them to the studio / content owner and so on. It is the literal translation of the pay TV model onto the internet. Here again, the control resides in the distribution. Apple and Amazon have been successful because they have an existing customer base that they have been able to convert. This captive audience is the equivalent of the MSO’s set top box.
Brands with a smaller footprint in terms of device penetration have struggled to emulate this strategy. Sony’s “Video Unlimited”, available on its PlayStation and selected devices, has struggled to reach its audience, for instance.

2.    Subscription VOD

Inaugurated by Netflix, it has become the reference for OTT video. A monthly subscription allows consumers to watch as many shows as they want. Success in this model relies on both the depth and the range of the catalogue. Netflix had to have headline content to attract new users and enough of a long tail to keep them there. Most SVOD offers are monthly subscriptions without commitment, so they traditionally experience high churn.

3.    Free to air

YouTube is the most successful OTT player with a free-to-air strategy. Acquired by Google in 2006, the web phenomenon attracts over one billion unique users each month [2]. Monetization of this strategy has been slow. Advertising is currently the main contributor, using Google's ad platform, but YouTube has recently launched premium channels, allowing any channel with over 100,000 followers to go premium for as little as $0.99 per month. It is not yet apparent whether that strategy will be successful.
Adult content providers are the second largest OTT players in this category, monetizing premium content through subscription. A small percentage of their viewership base subscribes to premium and generates close to $4.9 billion in revenue globally.

4.    Securing content

If content is king, content rights are the crown jewels. Securing content that will attract and retain consumers is the principal occupation of OTT players. Studios and content producers now have new avenues for the distribution of their content, and even as traditional pay TV weakens in viewership, it still dwarfs OTT revenues. The most popular content can spur a viewership addiction synonymous with subscription and advertising revenue. It has become necessary for the likes of Netflix to secure access to content. In 2012, Netflix lost the distribution rights to the Starz, Encore and Sony catalogues over broken negotiations. Clearly, having your core value (content) subject to third-party control and threatened regularly by the whims of negotiation is not a very good strategy for long-term success. Increasingly, OTT players and channels have started acquiring and producing content exclusively in order to guarantee access, control and ultimately monetization of popular content.
HBO has, for instance, developed the series “Game of Thrones”, which became an overnight critical and popular success, drawing fans to the network and becoming one of the most pirated series of 2012 [4].
Netflix later secured a deal with Disney, valued at close to $300 million per year for Disney, which sees Netflix get exclusive access to Disney’s movies after their theatrical release. In 2013, Netflix doubled down and signed a follow-on deal for exclusive Disney content, “Agents of S.H.I.E.L.D.”.

5.    Favoring binge watching

Consumers' buying habits have changed durably, as we have seen, but their viewing habits are also undergoing a transformation. With whole back-catalogue seasons of a series available, binge watching has become a solid trend: many viewers, when watching a streaming TV show, increasingly watch more than one episode per sitting. Detecting the trend early, Netflix's strategy for the release of “House of Cards” was to release the full season at once, as opposed to the fixed schedule favored by traditional TV. Netflix has since released a survey with Harris Interactive showing that 61% of Netflix series viewers are binge watchers.

6.     Costs reduction

In the same vein as Verizon, Netflix has undertaken to control its delivery network. Unlike Verizon, it was not an acquisition but organic development that saw Netflix launch its own CDN, Open Connect, in 2012. Recognizing that delivering massive amounts of video over the internet can be costly and unreliable at scale, major OTT players are looking to control the end-to-end user experience and leverage economies of scale from dedicated network infrastructure. General-purpose CDNs are perfect for general internet content, but their business model and quality start to be stretched to their limits when it comes to massive video delivery.

Wednesday, January 15, 2014

Net neutrality denied for US broadband

Tuesday this week, the US Court of Appeals for the DC Circuit ruled that the FCC (the US regulator) had no authority to impose its "Open Internet Order" (net neutrality) rules on broadband carriers. The rationale is that broadband carriers, such as the plaintiff, Verizon, are to be considered on the same level as Google, Apple and Netflix and should not be subjected to net neutrality provisions.

Those among you who read this blog know my stance on net neutrality in mobile networks and as it pertains to video. This ruling is the first to recognize that carriers, fixed and mobile, are essentially in the same market as internet content and service providers, and that imposing net neutrality rules on the former would benefit the latter in an anti-competitive manner.

Net neutrality does not mean doing nothing and letting the traffic sort itself out; that is inefficient and contrary to the public interest. Because of video's elasticity, not actively managing traffic causes overall congestion, which impacts user experience and raises costs.

Unfortunately, many regulators worldwide, in a short-sighted attempt to appear open and supportive of the "free internet", enact net neutrality edicts that cripple their economies and reduce consumers' choice and quality of experience.

Net neutrality is a concept that can be applied to a network with large to infinite capacity and little to no congestion. It is a simplistic Keynesian view of network dynamics. The actors in the internet content delivery chain are all trying to produce the best user experience for their customers. When it comes to video, they invariably equate quality with speed. As a result, content providers, video players, web browsers and phone manufacturers are all trying to extract and control the most speed for their applications, devices and sites to guarantee a superior user experience.

The result is not a self-adjusting network that distributes resources efficiently based on supply and demand, but an inefficient flow of traffic that is subject to race conditions and snowball effects when Netflix, Google / YouTube, Apple and others compete for network capacity for their devices / browsers / apps / web sites / content.

As the CES show wrapped up last week, rife with 4K content and device announcements, one cannot help but think that traffic management, prioritization and sponsored data plans are going to become the rule going forward. That is, if the regulators accept the current ruling and adapt the concept of net neutrality to a marketplace where access providers are no longer in control and content and devices dictate usage and traffic.

Monday, January 21, 2013

The law of the hungriest: Net neutrality and video


I was reflecting recently on net neutrality and its impact on delivering video in wireless networks. Specifically, most people I have discussed this with seem to think that net neutrality means doing nothing: no intervention from the network operator to prioritize, discriminate, throttle, reduce or suppress one type of traffic versus another, whether per subscriber, location, device or service.

This strikes me as somewhat short-sighted and not very cognizant of how the industry operates. I wonder why net neutrality is supposed to apply to mobile networks, but not to handset manufacturers, app providers or content providers, for instance.

There have been several accounts of handset vendors or app providers implementing methods that are harmful to networks, whether unwittingly or in downright predatory fashion. Some smartphone vendors, for instance, implement proprietary variations of streaming protocols to grab as much network capacity as possible, irrespective of the encoding of the accessed video, to ensure fast and smooth video delivery to their device... to the detriment of others. It is easy to design an app, a browser or a video service that uses as much network capacity as possible, irrespective of what the service actually needs to function normally, which results in a better user experience for the person accessing the service / app / device, but a degraded quality of experience for everyone else.

Why is that not looked after by net neutrality regulatory committees? Why would the network provide unrestricted access to any app / device / video service and let them fight for capacity without control? Mobile networks then become ruled by the law of the hungriest, and when it comes to video, it can quickly become a fight dominated by the most popular web sites, phone vendors or app providers... I think that net neutrality, if it has to happen in mobile networks, must be managed, and that the notion of fair access must extend to all parties involved.

Monday, July 9, 2012

Edge based optimization part II: Edge packaging

As mentioned in my previous post, as video traffic increases across fixed and mobile networks, innovative companies try to find ways to reduce the costs and inefficiencies of transporting large amounts of data across geographies.

One of these new techniques is called edge based packaging and relies on adaptive bit rate streaming. It is particularly well adapted for delivery of live and VOD content (not as much for user-generated content).
As we have seen in the past, ABR has many pros and cons, which makes the technology useful in certain conditions. For fixed-line content delivery, ABR is useful to account for network variations and provides an optimum video viewing experience. One of the drawbacks is the cost of operating ABR, when a video source must be encoded into three formats (Flash, Apple and Microsoft) and many target bit rates to accommodate network conditions.


Edge-based packaging allows a server situated at a CDN's PoP, in the edge cache, to perform manifest manipulation and bit rate encoding directly at the edge. The server accepts one file / stream as input and can generate a manifest, rewrap, transmux and protect content before delivery. This method can generate great savings along several dimensions.

  1. Backhaul. The amount of payload necessary to transport video is drastically reduced, as only the highest quality stream / file travels between core and edge and the creation of the multiple formats and bit rates is performed at the PoP.
  2. Storage. Only one version of each file / stream needs to be stored centrally. New versions are generated on the fly, per device type, when accessed at the edge.
  3. CPU. Encoding is now distributed and on-demand, reducing the need for large server farms to encode predictively many versions and formats.
Additionally, this method makes it possible to monetize the video stream:
  1. Advertising insertion. Ad insertion can occur at the edge, on a per stream / subscriber / regional basis.
  2. Policy enforcement. The edge server can enforce and decide QoE/QoS class of services per subscriber group or per type of content / channel.

Edge-based packaging provides all the benefits of broadcast with the flexibility of unicast. It actually transforms a broadcast experience into an individualized, customized, targeted unicast experience. It is the perfect tool to optimize, control and monetize OTT traffic in fixed-line networks.
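To illustrate the manifest-manipulation step, here is a rough sketch of an edge packager building an HLS-style master playlist per device profile from a single mezzanine input; the profiles, bit rates and URL layout are hypothetical.

    # Illustrative edge-packaging step: build an HLS-style master playlist
    # per device profile from one high-quality mezzanine input.
    # Profiles, bit rates and URL layout are hypothetical.

    DEVICE_PROFILES = {
        "smartphone": [(400_000, "640x360"), (800_000, "960x540")],
        "tablet":     [(800_000, "960x540"), (1_500_000, "1280x720")],
        "smart_tv":   [(1_500_000, "1280x720"), (4_000_000, "1920x1080")],
    }

    def master_playlist(content_id: str, device_type: str) -> str:
        """Return an m3u8-style master playlist for the requested device."""
        lines = ["#EXTM3U"]
        for bandwidth, resolution in DEVICE_PROFILES[device_type]:
            lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
            lines.append(f"/edge/{content_id}/{bandwidth}/index.m3u8")
        return "\n".join(lines)

    print(master_playlist("movie-1234", "smartphone"))

Only the mezzanine travels from core to edge; the renditions referenced in the playlist are produced on demand at the PoP.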

Thursday, January 26, 2012

For or against Adaptive Bit Rate? part IV: Alternatives

As we have seen here, here and here, Adaptive Bit Rate (ABR) is a great technology for streaming video content in lossy networks, but it is handicapped by many challenges that are hindering its success and threatening its implementation in mobile networks.

Having spoken to many vendors in the space, here are two techniques that I have seen deployed to try and emulate ABR's benefits in mobile networks, while reducing dependence on some of the obstacles mentioned.

DBRA (Dynamic Bit Rate Adaptation)

DBRA is a technique that relies on real-time transcoding or transrating to follow network variations. It is implemented in the core network, on a video optimization engine. When the video connection is initialized, a DBRA-capable network uses TCP feedback and metrics to understand whether the connection is improving or worsening. The platform cannot detect congestion itself but deduces it from the state of the connection. Jitter, packet loss ratio, TCP window, device buffer size and buffer filling rate are all parameters fed into proprietary heuristic algorithms. These algorithms in turn instruct the encoder, frame by frame, bit by bit, to match the video bit rate to the available delivery bit rate.



In the diagram above, you see a theoretically perfect implementation of DBRA, where the platform follows network variations and "sticks" to the ups and downs of the transmission rate.
The difference between implementations depends largely on how aggressive or lax the algorithm is in predicting network variations. Being overly aggressive degrades the user experience, as the encoder decreases the encoding faster than the available bandwidth actually drops, while a lax implementation results in an equal or worse user experience if the platform does not reduce the encoding fast enough, depleting the buffer and resulting in buffering or interruption of playback.
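The heuristics themselves are proprietary, but the shape of the logic can be sketched: map a few TCP-level observations to a target encoding rate, with a safety margin whose tuning is exactly the aggressive-versus-lax trade-off described above. The thresholds and margins below are invented for illustration.

    # Toy DBRA-style heuristic: derive a target encoding bit rate from
    # TCP-level observations. Thresholds and margins are invented; real
    # implementations are proprietary and far richer.

    def target_bitrate_bps(throughput_bps: float,
                           rtt_ms: float,
                           loss_rate: float,
                           client_buffer_s: float) -> float:
        """Estimate how fast the transcoder should encode the next frames."""
        margin = 0.85                          # stay below the measured delivery rate
        if loss_rate > 0.02 or rtt_ms > 300:   # connection clearly degrading
            margin = 0.6
        if client_buffer_s < 2.0:              # buffer nearly empty: back off hard
            margin = 0.5
        return throughput_bps * margin

    # Example: a worsening connection pushes the encoder down step by step.
    samples = [
        (2_000_000, 80, 0.001, 8.0),
        (1_200_000, 180, 0.010, 4.0),
        (600_000, 350, 0.030, 1.5),
    ]
    for tput, rtt, loss, buf in samples:
        print(f"deliverable {tput / 1e6:.1f} Mbps -> encode at "
              f"{target_bitrate_bps(tput, rtt, loss, buf) / 1e6:.2f} Mbps")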

Theoretically, this is a superior implementation to adaptive streaming, as it does not rely on content providers to format and maintain streams and chunks that might not be fully optimized for all network conditions (WiFi, 3G, EDGE, HSPA, LTE…) and devices. It also guarantees an "optimal" user experience, always providing the best encoding the network can deliver at any point in time.
On the flip side, the technique is capex-intensive, as real-time encoding is CPU-intensive.

Vendors such as Mobixell, Ortiva and others are proponents of this implementation.


Network-controlled Adaptive Streaming:

Unlike ABR, where the device selects the appropriate bandwidth based on network availability, some vendors perform online transcoding to simulate an adaptive streaming scenario. The server feeds the client a series of streams whose quality varies throughout the connection and fakes the network feedback readout to ensure a deterministic quality and size; the target bit rate is computed from the TCP connection status. More plainly, the network operator decides at what bit rates a streaming connection should take place, spoofing the device by feeding it a manifest that does not correspond to the available delivery bit rate but to the bit rate selected by the carrier.
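As a hedged illustration of this manifest spoofing, the sketch below shows an in-network element rewriting an HLS-style master playlist so the device only ever sees the renditions the operator allows; the cap value is arbitrary.

    # Illustrative network-side manifest rewrite: strip every variant above
    # the bit rate the operator has decided to allow, so the ABR client
    # "chooses" only among approved renditions. The cap is arbitrary.
    import re

    STREAM_INF = re.compile(r"#EXT-X-STREAM-INF:.*BANDWIDTH=(\d+)")

    def cap_master_playlist(playlist: str, max_bandwidth: int) -> str:
        out, skip_next_uri = [], False
        for line in playlist.splitlines():
            m = STREAM_INF.match(line)
            if m:
                if int(m.group(1)) > max_bandwidth:
                    skip_next_uri = True      # drop this variant and its URI
                    continue
                skip_next_uri = False
            elif skip_next_uri and line and not line.startswith("#"):
                skip_next_uri = False
                continue                      # URI of a dropped variant
            out.append(line)
        return "\n".join(out)

    original = "\n".join([
        "#EXTM3U",
        "#EXT-X-STREAM-INF:BANDWIDTH=300000",
        "low/index.m3u8",
        "#EXT-X-STREAM-INF:BANDWIDTH=700000",
        "mid/index.m3u8",
        "#EXT-X-STREAM-INF:BANDWIDTH=1500000",
        "high/index.m3u8",
    ])

    print(cap_master_playlist(original, 700_000))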


This technique uses ABR as a Trojan horse. It relies on ABR for delivery and flow control, but the device loses the ability to detect network capacity, putting the carrier in control of the bandwidth it wants dedicated to the streaming operation.

These alternative implementations give carriers more control over streaming delivery on their networks. Conversely, handsets and content providers relinquish the capacity to control their user experience. The question is whether they really had that control in the first place, as mobile networks are so congested that the resulting user experience is in most cases below expectations. In any case, I believe more meaningful coordination and collaboration between content providers, carriers and handset manufacturers is necessary to put control of the user experience where it belongs: in the consumer's hands.

Wednesday, January 11, 2012

For or against Adaptive Bit Rate? part III: Why isn't ABR more successful?

So why isn't ABR more successful? As we have seen here and here, the technology has a lot going for it: it is a simple, efficient means of reducing the load on networks while optimizing quality of experience and reducing costs.

Let's review the problems that hinder ABR's penetration in the market.

1. Interoperability
Having three giants such as Apple, Adobe and Microsoft each pushing their own version of the implementation leads to obvious issues. First, the three vendors' implementations are not interoperable; that's one of the reasons why your iPad won't play Flash videos. Not only is the encoding of the file different (fMP4 vs. multiplexed), but the protocol (MPEG2-TS vs. HTTP progressive download) and even the manifest are proprietary. This leads to market fragmentation that forces content providers to choose their camp or implement all the technologies, which drives up the cost of maintenance and operation proportionally. MPEG-DASH, a new initiative aimed at rationalizing ABR use across the different platforms, was approved just last month. The idea is that all HTTP-based ABR technologies will converge towards a single format, protocol and manifest.

2. Economics
Apple, Adobe and Microsoft seek to control content owners and production by enforcing their own formats and encodings. I don't see them converging for the sake of coopetition in the short term. A good example is Google's foray into WebM and its ambitions for YouTube.

3. Content owners' knowledge of mobile networks
Adaptive bit rate puts the onus on content owners to decide which flavour of the technology they want to implement, together with the range of quality they want to enable. In last week's example, we saw how one file can translate into 18 versions and thousands of fragments to manage. Obviously, not every content provider is going to go the costly route of transcoding and managing 18 versions of the same content, particularly if that content is user-generated or free to air. This leaves the content provider with the difficult decision of how many versions of the content and how many quality levels to support.
As we have seen over the last year, the market changes at a very rapid pace in terms of which vendors dominate in smartphones and tablets. It is a headache for a content provider to foresee which devices will access their content. This is compounded by the fact that most content providers have no idea what the effective delivery bit rates can be for EDGE, UMTS, HSPA, HSPA+ or LTE. In this situation, the available encoding rates can be inappropriate for the delivery capacity.


In the example above, although the content is delivered through ABR, playback will be impacted when the delivery bit rate falls below the lowest available encoding bit rate. This results in a bad user experience, ranging from buffering to interruption of video playback.

4. Tablet and smartphone manufacturers' knowledge of mobile networks
Obviously, delegating the selection of content quality to the device is a smart move. Since the content is played on the device, this is where there is the clearest understanding of instantaneous network capacity or congestion. Unfortunately, certain handset vendors, particularly those coming from the consumer electronics world, do not have enough experience in wireless IP for efficient video delivery. Some devices, for instance, will grab the highest capacity available on the network, irrespective of the encoding of the video requested. So, for instance, if the capacity at connection time is 1 Mbps and the video is encoded at 500 kbps, it will be downloaded at twice its rate. That is not a problem when the network has capacity to spare, but as congestion creeps in, this behaviour snowballs and compounds congestion in embattled networks.
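A toy back-of-the-envelope comparison, with hypothetical numbers, of what this greedy fetching does to a cell: devices that pull at whatever rate the network will give them keep the cell saturated even when only a handful of streams are active, while pacing at the encoding rate leaves headroom until the cell genuinely fills up.

    # Hypothetical cell and stream figures, for illustration only: compare
    # the load offered by devices that pace at the encoding rate ("paced")
    # with devices that pull at whatever share of the cell they can get.

    CELL_CAPACITY_BPS = 10_000_000
    ENCODING_BPS = 500_000

    def offered_load(n_devices: int, greedy: bool) -> float:
        """Fraction of cell capacity requested by n concurrent streams."""
        per_device = CELL_CAPACITY_BPS / n_devices if greedy else ENCODING_BPS
        per_device = max(per_device, ENCODING_BPS)   # never below the playback rate
        return n_devices * per_device / CELL_CAPACITY_BPS

    for n in (5, 10, 20, 30):
        print(f"{n:2d} concurrent streams: "
              f"paced {offered_load(n, greedy=False):4.0%} of cell, "
              f"greedy {offered_load(n, greedy=True):4.0%} of cell")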

As we can see, there are still many obstacles to overcome before ABR becomes a successful mass-market implementation. My next post will show what alternatives exist to ABR in mobile networks for efficient video delivery.

Friday, January 6, 2012

For or against Adaptive Bit Rate? part II: For ABR

As we have seen here, ABR brings significant improvements to the way video can be delivered in lossy network conditions.
If we take the fragmented MP4 implementation, we can see that the benefits to a network and a content provider are significant. The manifest, transmitted at the establishment of the connection between the player and the server, describes the video file, its audio counterpart, its encoding and the different streams and bit rates available.

Since the player has access to all this at the establishment of the connection, it has all the data necessary for an informed decision on the best bit rate to select for the delivery of the video. This is important because ABR is the only technology today that gives the device control over the selection of the version (and therefore the quality and cost) of the video to be delivered.
This is crucial, since there is no efficient means today to convey congestion notification from the Radio Access Network through the Core and Backhaul to the content provider.
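To make the device-side decision concrete, the selection logic can be as simple as the sketch below: pick the highest bit rate advertised in the manifest that fits under the throughput the player itself measures. The bit rates and margin are illustrative, not taken from any particular player.

    # Sketch of the device-side decision ABR enables: pick the highest
    # advertised bit rate that fits under the player's own throughput
    # estimate, with a safety margin. Values are hypothetical.

    ADVERTISED_BPS = [300_000, 500_000, 700_000]   # from the manifest

    def select_rendition(measured_throughput_bps: float,
                         margin: float = 0.8) -> int:
        """Return the bit rate the player should request for the next segment."""
        budget = measured_throughput_bps * margin
        candidates = [r for r in ADVERTISED_BPS if r <= budget]
        return max(candidates) if candidates else min(ADVERTISED_BPS)

    # The player re-evaluates this as its throughput estimate changes.
    for throughput in (1_200_000, 800_000, 450_000, 250_000):
        print(f"measured {throughput / 1e3:.0f} kbps -> request "
              f"{select_rendition(throughput) / 1e3:.0f} kbps rendition")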

Video optimization technology sits in the core network and relies on its reading of the state of the TCP connection (packet loss, jitter, delay...) to deduce the health of the connection and cell congestion. The problem is that a degradation of the TCP connection can have many causes beyond payload congestion. The video optimization server can end up making decisions to degrade or increase video quality based on insufficient observations or assumptions, which might end up contributing to congestion rather than assuaging it.

ABR, by giving the device the capability to decide on the bit rate to be delivered, relies on the device's reading of the connection state rather than that of an appliance in the core network. Since the video will be played on the device, this is the place where the measurement of the connection state is most accurate.

As illustrated below, as network conditions fluctuate throughout a connection, the device selects the bit rate that is most appropriate for the stream, jumping between 300, 500 and 700 kbps in this example, to follow network conditions.

This provides an efficient means of giving the user optimal quality as network conditions fluctuate, while reducing pressure on congested cells when the connection degrades.

So, with only 4 to 6% of traffic using it, why isn't ABR more widely used, and why are network operators implementing video optimization solutions in the core network? Will ABR become the standard for delivering video in lossy networks? These questions and more will be answered in the next post.

Tuesday, January 3, 2012

For or against Adaptive Bit Rate? part I: what is ABR?

Adaptive Bit Rate streaming (ABR) was invented to enable content providers to offer video streaming services in environments where bandwidth fluctuates. The benefit is clear: as a connection's capacity changes over time, the video carried over that connection can vary its bit rate, and therefore its size, to adapt to network conditions. The player (or client) and the server exchange discrete information on the control plane throughout the transmission, whereby the server exposes the available bit rates for the video being streamed and the client selects the appropriate version based on its reading of the current connection conditions.

The technology is fundamental to help accommodate the growth of online video delivery over unmanaged (OTT) and wireless networks.
The implementation is as follows: a video file is encoded into different streams at different bit rates. The player can "jump" from one stream to another as the condition of the transmission degrades or improves. A manifest is exchanged between the server and the player at the establishment of the connection so the player knows the list of versions and bit rates available for delivery.

Unfortunately, the main content delivery technology vendors then started to diverge from the standard implementation to differentiate themselves and better control the user experience and the content provider community. We have reviewed some of these vendor strategies here. Below are the main implementations:

  • Apple HTTP (Live) Streaming (HLS) for iPhone and iPad: This version is implemented over HTTP and MPEG2-TS. It uses a proprietary manifest format called m3u8. Apple's scheme creates different versions of the same stream (two to six, usually) and breaks each stream down into little “chunks” to facilitate the client jumping from one stream to another. This results in thousands of chunks per stream, identified through timecodes. Unfortunately, the content provider has to deal with the pain of managing thousands of fragments for each video stream; a costly implementation.
  • Microsoft IIS Smooth Streaming (Silverlight, Windows Phone 7): Microsoft has implemented fragmented MP4 (fMP4) to enable a stream to be separated into discrete fragments, again allowing the player to jump from one fragment to another as conditions change. Microsoft uses AAC for audio and AVC/H.264 for video compression. The implementation allows each video and audio stream, with all its fragments, to be grouped in a single file, providing a more cost-effective solution than Apple's.
  • Adobe HTTP Dynamic Streaming (HDS) for Flash: Adobe uses a proprietary format called F4F to allow delivery of Flash video over RTMP and HTTP. The Flash Media Server creates multiple streams at different bit rates and quality levels. Streams are full length (the duration of the video).

None of the implementations above are interoperable, from a manifest or a file perspective, which means that a content provider with a single 1080p HD video could find himself creating one version for each player, multiplied by the number of streams to accommodate bandwidth variation, multiplied by the number of segments, chunks or files for each version... As illustrated above, a simple video can result in 18 versions and thousands of fragments to manage. This is the reason why only 4 to 6% of current video is transmitted using ABR. The rest of the traffic uses good old progressive download, with no capacity to adapt to changes in bandwidth, which in turn explains why wireless network operators (over 60 of them) have elected to implement video optimization systems in their networks. We will look, in my next posts, at the pros and cons of ABR and at complementary and competing technologies to achieve the same goals.
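The arithmetic behind "18 versions and thousands of fragments" is easy to check; the sketch below assumes a hypothetical two-hour title and ten-second chunks.

    # Back-of-the-envelope arithmetic behind "18 versions and thousands of
    # fragments": 3 packaging formats x 6 bit rates, then chunking.
    # The title length and segment duration are hypothetical.

    formats = 3               # HLS, Smooth Streaming, HDS
    bitrates = 6              # renditions per format
    movie_length_s = 2 * 3600
    segment_length_s = 10     # a common chunk duration

    versions = formats * bitrates
    segments_per_rendition = movie_length_s // segment_length_s
    chunked_fragments = bitrates * segments_per_rendition   # for a chunked format like HLS

    print(f"{versions} versions to encode and manage")
    print(f"{chunked_fragments} fragments for the chunked format alone")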

Find part II of this post here.

Monday, December 5, 2011

Pay TV vs OTT part IV: clash of the titans

We have reviewed and discussed at length (here, here, and here) the fundamental changes that OTT is causing to the pay TV market. As consumer electronics vendors become content aggregators and as more screens are now directly connected to the internet, there is less and less value in a set top box that is an exclusive, managed device from your MSO.

Service providers themselves are ambivalent about the box. It used to be the main tangible asset that MSOs marketed to "own" a subscriber relationship, with a safe environment allowing transactions, access control and digital rights management to monetize live and on-demand programs.
Lately, it has looked increasingly like a ball and chain that MSOs drag around: a costly installed base, slow to evolve and adapt to the latest technologies, incapable of competing against the better services and cost structures that have evolved from OTT.

Microsoft, in the latest incarnation of its Xbox Live service, has brokered deals with several dozen content providers beyond the existing Hulu and Netflix, and is launching today. More interestingly, Verizon FiOS, Comcast Xfinity and HBO are also part of the package... as OTT apps. The Xbox is already a high-definition, high-performance gaming and multimedia environment for playing online games and streaming video. Adding live TV and VOD makes sense and makes the set top box completely redundant. Microsoft innovates by integrating Bing (its search engine), Kinect (its motion recognition device) and voice with the programmer's EPG (Electronic Programming Guide). You can literally search by voice for a show, an actor or a director and see the results aggregated on your screen from various sources.

While you still have to be a Comcast or Verizon cable subscriber to avail of these services in the States, the writing (or rather the screen) is on the wall.

This experiment will no doubt cast a new light on the 35 million Xbox Live accounts, putting Microsoft firmly shoulder to shoulder with Google's TV efforts (and Motorola's set top boxes) and the next generation of Apple TV.


Soon there will be a time when subscribers buy access from their ISP independently of aggregation and content. Channels and MSOs will compete across new geographies, on unmanaged devices, across unmanaged networks. A new generation of apps will enable you to discover, access and curate content from your local media servers, the cloud and your traditional providers, and present the result on the screen you choose. There is no technological or logistical barrier any longer. The pay TV business model of subscription and advertising is undergoing changes of seismic proportions.

Tuesday, November 29, 2011

Need an IT manager for my connected home!

I am not really an early adopter. I tend to integrate new products and technologies when my needs change.
Until recently, my electronic devices were dumb and mute, just doing what I wanted them to, either working or not.

In this new era of hyper-connected homes, though, everything becomes exponentially more complex as you add connected devices. Since I started my business, I have also had to use cloud-based apps and services to expand my brick-and-mortar tools.
Now, with two desktops, a laptop, a tablet, two smartphones, a connected PVR, a PS3 and countless accounts and services from Dropbox, YouTube, Netflix, Google Apps, Twitter, Blogger... it does not take much to see how these devices, interacting with all these apps and data points, can quickly start conflicting with each other.
Especially when you factor in that these devices communicate over LAN, WiFi, Bluetooth, RF, IR...
Add surveillance cameras and energy management modules in the future, and complex becomes complicated.

UPnP (Universal Plug and Play) and DLNA (Digital Living Network Alliance) usually do a good job of device discovery. Service and content discovery and priority setting are where it starts to get tricky.
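Device discovery really is the easy part: a UPnP device will answer a single multicast SSDP probe like the minimal sketch below (standard SSDP defaults, for illustration only).

    # Minimal SSDP (UPnP discovery) probe: multicast an M-SEARCH and print
    # whichever devices answer. This is the discovery step that UPnP/DLNA
    # handles well; standard SSDP defaults, for illustration only.
    import socket

    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: ssdp:all",
        "", "",
    ])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

    try:
        while True:
            data, addr = sock.recvfrom(65507)
            # Each responder announces itself; the full reply carries the
            # LOCATION header pointing at its device description.
            print(addr[0], data.decode("ascii", errors="replace").splitlines()[0])
    except socket.timeout:
        pass
    finally:
        sock.close()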
Here are a few of the problems I had to face, or am still facing, in this hyper-connected world.

Authentication and handover:
I use Rogers as a service provider for one of my smartphones. I use their self-help app to manage my bill, my subscription and travel packages. One of the things that is truly a problem is that it works only on the cellular network. Most of the time, I need to use it when I am travelling, to add or remove a travel pack for voice, data or text. Because of expensive roaming data rates, it does not make sense to connect while roaming just to enable a feature that saves me roaming costs. Obviously, Rogers has not enabled WiFi-to-cellular authentication and credential handover.

Authorization and software version control:
I am a Bell subscriber for TV and internet at home. I was excited when I received an email showing off Bell's new mobile TV and companion screen apps for my iPhone / iPad. I was less excited when my iPhone, on the Rogers network, could not access Bell's content, even though I am a Bell customer. Too bad, but I thought at least I could use the PVR remote control with my iPad on Bell's network. That does not work either, because I would have to upgrade my PVR: a PVR I am renting from Bell. You would think it would be possible for them to know what PVR I am using and therefore allow me to reflash its software to avail of the new capabilities, or try to upsell me to the latest PVR and features...

Credentials management
At some point, security relents before complexity. When you want to run a secure network across several interfaces and devices, managing credentials with associated permissions becomes tricky. You have to find a way to have credentials that can easily be shared yet remain secure, while managing which device has access to which dataset under which conditions.

Connectivity, content discovery  and sharing:
Inevitably, users buy new devices and add capabilities along the way. The flip side of that coin, though, is that it makes for a very heterogeneous environment. When you start having several devices with similar or overlapping capabilities, you want them to work with each other seamlessly. For instance, my old desktop running XP cannot easily join the workgroup of my new desktop and laptop running Windows 7.
There are solutions, but none of them straightforward enough for a regular user. A last example: my laptop, my iPad, my iPhone, my PVR, my two desktops and my PS3 all, to some extent, act as media servers. They all have local content, and they all access content from the cloud, the internet or other devices. Again, I haven't yet found a solution that would allow me to manage and share content across devices with clear permission management. Additionally, there is no search or recommendation engine that would let me perform a meta-search across 1) my local content on several devices, 2) the internet and the OTT content providers and apps I am using, and 3) the electronic programming guide of my set top box, and present me with a choice like: do you want to watch Boardwalk Empire Sunday at 9 pm on HBO, now on HBO Go, buy the entire season on Amazon, or play the episodes from my PVR or media servers?

Compatibility:
Too often, I have to transcode videos or change content formats to ensure that I can watch them on all my screens. This leads to multiple versions of the same content, with the associated discoverability and version control issues. Another example is contact management. It is incredible that Apple still does not get contact management right. If you enable iCloud and have your contacts synchronized with anything that is not Apple (Google Contacts or LinkedIn), you end up with endless duplicate contacts and no hope of merging and deleting them without adding new, expensive apps.

Control and management:
It strikes me that, with that many connected devices and apps, I have not yet found a single dashboard giving me visibility, control and management of all my devices, allowing me to allocate bandwidth and permissions for sharing data and content across platforms.

At the end of the day, this field is still emerging, and while it is possible to have a good implementation when purchasing a solution from scratch from a single vendor or service provider, assembling a solution organically as you add new devices is likely to have you spend hours deciphering DNS and DHCP configurations. What is needed in the short term is a gateway platform acting as middleware, indexing and aggregating devices and content, and providing a clear dashboard for permission management and authorization. That gateway could be the set top box, if it is powerful enough. It would give back to MSOs the control they are losing to OTT, if they are willing to integrate and provide a cohesive environment.

Friday, November 4, 2011

Openwave licenses its patents to Microsoft

In a press release dated November 3, Openwave announced its 1Q12 results, and a licensing agreement with Microsoft in a separate release.

Openwave has had quite a momentous summer, between the unraveling of its relationship with Juniper, the repurchase of the patent portfolio it had sold to Myriad Group, and the replacement of its CEO (here, here and here). Never a dull moment. Having said that, the portfolio of patents that Openwave re-appropriated is a good representation of the pioneering position Openwave held in the industry in the '90s, with many seminal patents in mobile internet and mobile browsing. It was a good move to get them back, from a company valuation standpoint.


OPENWAVE SYSTEMS INC.
CONDENSED CONSOLIDATED STATEMENTS OF OPERATIONS - UNAUDITED
(In thousands, except per share data)

                                          Three Months Ended
                              Sep 30, 2011    Jun 30, 2011    Sep 30, 2010
Revenues:
  License                           $9,914         $10,273         $12,332
  Maintenance and support           10,671          10,677          13,993
  Services                          16,790          14,255          11,203
  Patents                           15,021              10           4,000
  Total revenues                    52,396          35,215          41,528


While the company's results for this quarter seem positive at first glance, they actually tell an interesting story. License revenues are still decreasing (quarter over quarter and year over year), while maintenance and support, flat vs. last quarter and decreasing vs. last year, indicate an aging, slowly shrinking customer base. The steady increase in services revenues, together with the decrease in license revenues, would tend to indicate that the company is milking custom development and change request opportunities while customers hesitate to invest in the new product range.

                                              Sep 30, 2011    Jun 30, 2011    Sep 30, 2010
Operating expenses:
  Research and development                           9,348           9,836          11,430
  Sales and marketing                                8,737          11,509          10,821
  General and administrative                         7,786           7,167           6,612
  Restructuring and other related costs              5,072             524             708
  Total operating expenses                          30,943          29,036          29,571

Cost of revenue is flat-ish, with the notable exception of cost of services, further indicating the custom development aspect. Opex remains on par with last quarter, despite a $5m restructuring charge, mostly offset by lower sales and marketing expenses.

What stands out is that Openwave manages to turn a $3m profit this quarter. It is due to the licensing agreement with Microsoft, whereby Microsoft licenses Openwave's entire portfolio of 200+ patents. Patent revenues for the quarter were over $15m.
It is a good operation for Openwave: licensing its portfolio to a presumed non-competitor in the infrastructure space brings revenue, and maybe more significant strategic tie-ins in the long term. Unfortunately, it does little for Openwave's capacity to recapture market share.
For Microsoft, it is an interesting move. I suspect many of the patents will prove quite liquid when it comes to the next round of litigation. It will be interesting to see how Nokia / Microsoft / Openwave can take on Apple and Google for mobile internet supremacy.