Following Citrix's abrupt announcement that it would discontinue the ByteMobile product line, things are starting to get a little clearer in terms of the potential next steps for its customers.
Citrix was the market leader in terms of number of deployments and revenue in the video optimization market when it decided to kill this product offering as part of an internal strategic realignment. The news left many customers confused as to what support - if any - they can expect from the company.
Citrix's first order of business over the last month has been to meet with every major account to reassure them that the transition will follow a plan. What transpires at this point is that a few features from the ByteMobile T-3100 product family will be migrated to NetScaler, probably towards the end of 2016. Citrix is still circling the wagons at this stage and seems to be evaluating the business case for the transition, which will determine how many features are ported and whether feature parity can be reached.
In many cases, network operators who have deployed versions of ByteMobile T-3100 have been put on notice to upgrade to the latest version, as end-of-support notices for older versions will go out next year.
Concurrently, Citrix presumably won't be able to confirm NetScaler's detailed roadmap and transition plan until it has a better idea of the number and type of customers that will elect to migrate.
In the meantime, ByteMobile's historical competitors are drawing battle plans to take advantage of this opportunity. A forklift upgrade is never an easy task to negotiate and, no doubt, there will be much pencil sharpening in the new year in core networks procurement departments.
The video optimization market has changed dramatically over the last year. The growth in encrypted traffic, the uncertainty surrounding Citrix and the net neutrality debate have changed the feature set operators are looking for.
Orders for real-time transcoding have declined sharply because of cost and encryption, while TCP optimization, encrypted traffic analytics, video advertising and adaptive bit rate management are gaining favor.
The recent T-Mobile USA "Binge On" offering, which provides managed video for premium services, is also being closely watched by many network operators and will in all likelihood create more interest in video management collaboration solutions.
As usual, this and more in my report on video monetization.
Monday, December 21, 2015
Friday, November 20, 2015
Citrix shuts down ByteMobile
Citrix has decided to "de-invest" in the ByteMobile product line that was initially reported to be for sale. In an investor call this week, Citrix provided an update on the results of the strategic review it announced in September.
Executives commented:
"The underlying premises for the acquisition of ByteMobile have now vanished.We acquired the company for its ability to optimize video traffic,but today a significant amount of the video traffic is encrypted and can no longer be optimized. [...] We will transition some of the capabilities in the NetScaler product but for the most part phasing that product line out."The company mentioned that ByteMobile revenue for 2015 were expected around $50m and breaking even. XenServer will also be discontinued (unsurprisingly looking at VMWare and KVM's relative success).
Citrix had acquired ByteMobile in 2012 for $435m, when the company was leading the video optimization market segment.
The video optimization market has greatly suffered as a stand-alone value proposition under the combined pressure of the growth of encrypted video traffic and the uncertainty surrounding the future of ByteMobile, the market segment leader in terms of installed base. Vendors in the space have bundled the technology into larger offerings ranging from policy enforcement to video analytics and video advertising and monetization. Last week, T-Mobile introduced its "Binge On" video plan based on video optimization of adaptive bit rate traffic, and multiple vendors have been announcing support for encrypted video traffic management solutions.
Further review of the video optimization market size and projection, vendors and strategies available in workshop and report format.
Thursday, November 12, 2015
All you need to know about T-Mobile Binge On
Have you been wondering what T-Mobile US is doing with your video on Binge On?
Here is a small guide and analysis of the service, its technology, features and limitations.
T-Mobile announced the launch of its new service, Binge On, at its Uncarrier X event on November 11. The company's CEO remarked that video is the fastest-growing data service, up 145% compared to two years ago, and that consumers are increasingly watching video on mobile devices, on wireless networks, cutting the cord from their cable and satellite TV providers. Binge On was created to address these two market trends.
I have previewed many of the features launched with Binge On in my video monetization report and my blog posts (here and here, on encryption and collaboration) over the last four years.
Binge On allows any new or existing subscriber with a 3GB data plan or higher to stream videos free from a number of apps and OTT properties. Let's examine what the offer entails:
- Subscribers with 3GB data plans and higher are automatically opted in. They can opt out at any moment and opt back in whenever they want. This is a simple mechanism that allows service transparency but, more importantly, underpins the claim that the service is net neutral. I have pointed out for a long time that services can be managed (prioritized, throttled, barred...) as long as subscribers opt in. Video optimization falls squarely into that category, and T-Mobile certainly heeded my advice in that area. More on this later.
- Services streaming free in Binge On are: Crackle, DirecTV, Encore, ESPN, Fox Sports, Fox Sports GO, Go90, HBO GO, HBO NOW, Hulu, Major League Baseball, Movieplex, NBC Sports, Netflix, Showtime, Sling Box, Sling TV, Starz, T-Mobile TV, Univision Deportes, Ustream, Vessel, Vevo, VUDU.
- You still have to register / subscribe to the individual services to be able to stream them free on T-Mobile's network.
- Interestingly, no Google properties (YouTube) or Facebook are included yet. Discussions are apparently ongoing.
- These OTT video services maintain their encryption, so the content and consumer interactions are safe.
- There were mentions of a mysterious "T-Mobile proprietary streaming technology and video optimization" that requires video service providers to integrate with T-Mobile. This is not transcoding; it relies on adaptive bit rate optimization, ranging from throttling data, to transrating, to manifest manipulation (asking video providers to serve an unencrypted manifest so that it can be edited and capped at 480p definition - see the sketch after this list).
- Yep, video is limited to 480p definition, which T-Mobile defines as DVD quality. It's going to look good on a smartphone, OK on a tablet and bad on anything bigger / tethered.
- I take issue with the representation "We've optimized streaming so that you can watch 3x more video", because mostly it's:
- Inaccurate (if this is unlimited, how can unlimited be 3x what you are currently watching?);
- Inexact: if they are referring to the fact that a 480p file is on average about a third of the size of a 1080p file (which is close enough), they are wrongly assuming that you only watch HD 1080p video, while most of these providers rely on adaptive bit rate, which varies the video definition based on network conditions.
- Wrong, since most people assume watching 3x more video means spending 3x the amount of time watching video, rather than 3x the file size.
- In bad faith, since T-Mobile limited video definition so that users wouldn't kill its network. Some product manager / marketing drone decided to turn this limitation into a feature...
- Now in the fine print, for the videos you watch that are not part of the package, expect that "Once high-speed data allotment is reached, all usage slowed to up to 2G speeds until end of bill cycle." 2G speed? For streaming video? Like watching animated GIFs? It is understandable that there has to be a carrot (and a stick) for providers who have not joined yet, as well as some fair-usage rules for subscribers exceeding their data plans - but 2G speed? Come on, you might as well stop the stream rather than pretend that you can stream anything at 128 kbps.
- More difficult to justify is the mention "service might be slowed, suspended, terminated, or restricted for misuse, abnormal use, interference with our network or ability to provide quality service to other users". So basically, there is no service level agreement for minimum quality of service. Ideally, if a video service is limited to 480p (when you are paying Netflix et al. for 1080p or even 4K, let's remember), one should expect either a guaranteed level of service or a minimum quality floor.
- Another vague and spurious rule is "Customers who use an extremely high amount of data in a bill cycle will have their data usage de-prioritized compared to other customers for that bill cycle at locations and times when competing network demands occur, resulting in relatively slower speeds." This is not only vague and subjective, it will vary over time and location (with 145% growth in two years, an abnormal video user today will be average tomorrow). More importantly, it goes against some of the net neutrality rules.
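On the manifest manipulation point above: here is a minimal sketch of how an HLS master playlist could be capped at 480p, assuming an unencrypted manifest. This is my illustration of the technique, not T-Mobile's actual mechanism.

```python
import re

MAX_HEIGHT = 480  # Binge On's stated definition cap

def cap_master_playlist(m3u8_text: str, max_height: int = MAX_HEIGHT) -> str:
    """Return a copy of an HLS master playlist with renditions above
    max_height removed, so the player can only pick 480p or below."""
    out, skip_next = [], False
    for line in m3u8_text.splitlines():
        if skip_next:          # this is the URI of a rendition we dropped
            skip_next = False
            continue
        m = re.search(r"RESOLUTION=(\d+)x(\d+)", line)
        if line.startswith("#EXT-X-STREAM-INF") and m and int(m.group(2)) > max_height:
            skip_next = True   # drop the tag line and the URI that follows
            continue
        out.append(line)
    return "\n".join(out)

master = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
hd1080.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
sd480.m3u8"""

print(cap_master_playlist(master))  # only the 480p rendition survives
```

Because the player only ever sees the filtered playlist, it cannot request the HD renditions - which is why this only works if the manifest is left unencrypted.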
[Table: file size per hour of streamed video, per definition]
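The table itself did not survive, but the arithmetic behind it is simple. A back-of-the-envelope reconstruction, with assumed typical encoding bit rates (my figures, not T-Mobile's):

```python
# Rough file size per hour at typical encoding bit rates (assumed values).
bitrates_kbps = {
    "1080p": 5000,
    "720p": 2500,
    "480p": 1400,        # roughly a third of 1080p, hence the "3x" claim
    "2G fallback": 128,  # the throttled speed: barely enough for audio
}

for definition, kbps in bitrates_kbps.items():
    gb_per_hour = kbps * 1000 / 8 * 3600 / 1e9  # bits/s -> bytes -> GB/hour
    print(f"{definition:>12}: ~{gb_per_hour:.2f} GB/hour")
```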
T-Mobile innovates again with a truly new approach to video services. Unlike Google's Project Fi, this is a bold strategy, relying on video optimization to provide a quality ceiling and on integration with OTT content providers to enable the limitation but, more importantly, to secure an endorsement of the service. The service is likely to be popular in terms of adoption and usage; it will be interesting to see how the user experience evolves as the user base grows. At least there is now a fixed ceiling for video, which will allow for network capacity planning, removing variability. What is most remarkable in the launch, from my perspective, is the desire to innovate and to take risks by launching a new service, even if there are some limitations (video definition, providers...) and risks (net neutrality).
Want to know more about how to launch a service like Binge On? What technology, vendors, price models...? You can find more in my video monetization reports and workshop.
Wednesday, November 4, 2015
What are your intentions with my network?
Over the last few months, there has been much talk about intent rather than prescription in telecom networks connectivity and traffic management. Intent is expressing a desired outcome, whereas prescription is describing the path and actions necessary for that outcome.
For instance, in a video optimization environment, intent can be "I want all users in a cell to be able to stream video to their requested definition, but if the total demand exceeds capacity, I want all videos to downgrade until they can all be simultaneously served".
The current prescriptive model could look more like:
- Append cell ID to RADIUS / Diameter traffic
- Segregate HTTP traffic at the DPI
- Send HTTP to web gateway
- Segregate video traffic at the web gateway
- Send video traffic to video optimization engine
- Detect if video is
- HTTP progressive download or
- HLS or
- Adaptive bit rate or
- other
- Detect video encoding bit rate
- Measure video delivery bit rate
- Aggregate traffic per Cell ID
- If video encoding bit rate exceeds video delivery bit rate in a given cell
- Load corresponding rule from PCRF (Diameter Gx interface)
- Transcode if progressive download
- Transrate if HLS
- Pace / throttle if adaptive bit rate or other
- Until delivery bit rate consistently exceeds encoding bit rate for all streams in that cell
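Strung together, the chain above amounts to a control loop. A toy sketch, with invented flow attributes and a pretend 20% downgrade step standing in for actual transcoding / transrating / pacing:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    kind: str            # "progressive", "hls" or "abr"
    encoding_kbps: int   # detected video encoding bit rate
    delivery_kbps: int   # measured delivery bit rate

def downgrade(flow: Flow) -> None:
    """Stand-in for the transcode / transrate / pace actions above."""
    action = {"progressive": "transcode", "hls": "transrate"}.get(flow.kind, "pace")
    flow.encoding_kbps = int(flow.encoding_kbps * 0.8)  # pretend we shaved 20%
    print(f"{action} flow -> {flow.encoding_kbps} kbps")

def enforce_cell_policy(flows: list[Flow]) -> None:
    """Iterate until aggregate delivery keeps up with aggregate encoding."""
    while sum(f.encoding_kbps for f in flows) > sum(f.delivery_kbps for f in flows):
        for f in flows:
            downgrade(f)

enforce_cell_policy([Flow("hls", 2000, 1200), Flow("abr", 1500, 900)])
```

Even this toy version hides the hard part: in a real network, each of those steps happens on a different box, from a different vendor, speaking a different protocol.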
The problem so far is that an intent can be fairly simply expressed but can result in very complex, arduous, iterative prescriptive operations. The complexity is mostly due to the fact that there are many network elements involved in the "stream video" and "demand vs. capacity" operands of that equation, and that each element can interpret the semantics of "exceed" or "downgrade" differently.
ETSI ISG NFV and ONF have included these topics in their work programs lately, and ONF presented last month at the SDN & OpenFlow World Congress, where I participated in a panel. ONF is trying to tackle intent-based connectivity in SDN by introducing a virtualizer on the SDN controller.
The virtualizer is a common API that abstracts network-specific elements (types of elements such as routers, DPI, gateways...; vendors; interfaces; protocols; physical or virtual...) and translates intents into a modeling language used to program the different network elements for the desired outcome. That "translation" requires a flexible and sophisticated rendering engine that holds a stateful view of network elements, interfaces, protocols and semantics. The SDN controller would be able to arbitrate resource allocation as it does today, but with a natural-language programming interface.
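To make the contrast concrete, here is a hypothetical sketch of what such an intent could look like as a declarative object, with a stub for the rendering step. The Intent class and render function are my inventions for illustration, not ONF's API:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A declarative outcome; the virtualizer, not the operator, works out
    which elements (DPI, gateways, optimizers...) realize it."""
    scope: str
    outcome: str
    constraints: dict = field(default_factory=dict)

video_intent = Intent(
    scope="cell",
    outcome="stream video at requested definition",
    constraints={"on_congestion": "downgrade all streams until demand fits capacity"},
)

def render(intent: Intent) -> list[str]:
    """Hypothetical rendering step: map an intent onto prescriptive actions.
    A real virtualizer would hold a stateful model of elements, interfaces
    and protocols to produce the full chain shown earlier."""
    steps = [f"program elements in scope '{intent.scope}' for: {intent.outcome}"]
    for trigger, action in intent.constraints.items():
        steps.append(f"{trigger}: {action}")
    return steps

print("\n".join(render(video_intent)))
```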
ONF has started an open source project, BOULDER, to create an open-source virtualizer, initially for the OpenDaylight and ONOS controllers.
While this is very early, I believe that the virtualizer has the potential to change the balance between network engineers and programmers in mobile networks, provided that it is implemented widely amongst vendors. No doubt much work will be necessary, as the virtualizer's rendering of natural language into prescriptive policies looks too much like magic at this point, but the intent is good.
This and more in my "SDN & NFV in wireless networks" report and workshop.
Labels:
NFV,
ONF,
ONOS,
opendaylight,
SDN,
virtualized
Location:
Düsseldorf, Germany
Monday, October 19, 2015
SDN world 2015: unikernels, compromises and orchestrated obsolescence
Last week's Layer123 SDN and OpenFlow World Congress brought its usual slew of announcements and claims.
From my perspective, I came away from the show with contrasting impressions.
On one hand, it is clear that SDN has now transitioned from proof of concept to commercial trial, if not full commercial deployment, and operators now increasingly understand the limits of open source initiatives such as OpenStack for carrier-grade deployments. The telling sign is the increasing number of companies specializing in high-performance, hardware-based switches for OpenFlow and other protocols.
It feels like Open vSwitch has not yet hit its stride, notably in terms of performance. Operators are left either going open source (cost-efficient, but neither scalable nor performant) or compromising with best-of-breed, hardware-based, hardened switches that offer high performance and scalability, but not yet the agility of a software-based implementation. What is new, however, is that operators seem ready to compromise for time to market, rather than wait for a possibly more open solution that may - or may not - deliver on its promises.
On the NFV front, I feel that many vendors have been forced to tone down their inflated claims in terms of performance, agility and elasticity. It is quite clear that many of them have been called to prove themselves in operators' labs and have failed to deliver. In many cases, vendors are able to demonstrate agility through VM porting / positioning, using either their VNFM or an orchestrator integration; they are even, in some cases, able to show some level of elasticity with auto-scaling powered by their own EMS; and many have put out press releases claiming Gbps or Tbps or millions of simultaneous sessions of capacity...
... but few are able to demonstrate all three at the same time, since their performance achievements have, in many cases, relied on SR-IOV to bypass the hypervisor layer, which ties the VM to the CPU in a manner that makes agility and elasticity extremely difficult to achieve.
Operators, here again, seem bound to compromise between performance and agility if they want to accelerate their time to market.
Operators themselves came in droves to show their progress on the subject, but I felt a distinct change in tone in terms of their capacity to actually get vendors to deliver on the promises of the successive NFV white papers. One issue lies flatly with the operators' own attitude. Many MNOs are displaying unrealistic and naive expectations. They say that they are investing in NFV as a means to attain vendor independence, but they are unwilling to perform any integration themselves. It is very unlikely that large telecom equipment manufacturers will willingly help deconstruct their own value proposition by offering commoditized, plug-and-play, open-interfaced virtualized functions.
SDN and NFV integration is still dirty work. Nothing really performs at line rate without optimization; no agility, flexibility or scalability is really attained without fine-tuned integration. Operators won't realize the benefits of the technology if they don't get in on the integration work themselves.
Lastly, what is still missing from my perspective is a service creation strategy that would make use of a virtualized network. Most network operators still mention service agility and time to market as key drivers, but when asked what they would launch if their network were fully virtualized and elastic today, they quote disappointing early examples such as virtual (!?) VPN, security or broadband on demand... timid translations of existing "services" into a virtualized world. I am not sure most MNOs realize their competition is not each other but Google, Netflix, Uber, Facebook and others...
By the time they launch free and unlimited voice, data and messaging services underpinned by advertising or sponsored model, it will be quite late to think of new services, even if the network is fully virtualized. It feels like MNOs are orchestrating their own obsolescence.
Finally, the latest buzzwords you must have in your presentation this quarter are:
The pet and cattle analogy,
SD WAN,
5G
...and if you haven't yet formulated a strategy with respect to containers (Docker, etc.), don't bother; they're dead and the next big thing is unikernels. This and more in my latest report and workshop on "SDN NFV in wireless networks 2015 / 2016".
Labels:
containers,
cost containment,
elasticity,
flexibility,
NFV,
performance,
SDN,
unikernel,
virtualized,
vSwitch
Location:
Düsseldorf, Germany
Tuesday, September 29, 2015
SDN NFV in Telco 2015-2016 exec summary
2015 has been the year that SDN and NFV efforts "got real" in telco networks, past the first two years of enthusiasm marked by unrealistic vendor announcements and a flurry of participation in standards, open source and proof-of-concept projects. These efforts certainly put the technology on the map and have set operators' and vendors' priorities towards exploring the maturity, promises and limits of the technology.
One of the main problems with a revolutionary approach such as SDN and/or NFV implementation is that it suggests a complete network overhaul to deliver its full benefits. In all likelihood, no network operator is able to carry out these kinds of changes fully in less than a ten-year timescale, so what to do first?
The choice is difficult, since a few use cases have seemed easy enough to roll out but deliver few short-term benefits (vCPE, some routing and switching functions...), while the projects that should deliver the highest savings, the meaty ones, seem quite far from maturity (EPC, IMS, C-RAN...) in a multi-vendor elastic environment.
The problem is particularly difficult to solve because most of the value associated with the virtualization of mobile networks in the short term is supposedly tied to capex and opex savings. The business case for savings based on the introduction of new infrastructure is difficult to make without compelling new revenue streams financing the architectural upgrade.
Islands of SDN or NFV implementation in a sea of legacy network elements are not going to generate much in the way of savings. They could arguably generate new revenue streams if they were used to launch new services, but the focus so far has been to emulate and translate physical functions and networks into virtualized ones, with little effort in terms of new service creation.
As a result, the business case to deploy SDN or NFV in a commercial network today is negative and likely to stay so for the next few years unless innovative services are launched. I expect the momentum to continue, though, since the technology will have to work and deliver the expected savings for network operators to stand a chance of staying in business.
The other side of this coin is the service offering. While flexibility, time to market and the capacity to launch new services are always quoted as benefits of network virtualization, it seems that many operators have given up on innovation and service creation. The examples of new services are few and far between, and I would hope that these become the object of more focused efforts.
The explosion of OTT services, combined with the progressive opacity of traffic due to encryption, conspires to make network planning extremely difficult. Peak traffic is unpredictable and increasing in frequency and magnitude, which makes the rationale for purchasing network capacity based on dedicated appliances untenable from an economic standpoint.
Revenue stagnation is a given, with little in the way of new streams from new services such as VoLTE or M2M for a few years.
Cloud technology seems to be the key to the new OTT providers' agility, but its implementation supposes a complete organizational, structural and process upheaval that network operators are hesitant to undertake without a firm business case.
SDN remains a reliable, mature technology for enterprise and IT cloud management and traffic switching, but requires important efforts to adapt to telecommunications networks, mindsets and regulatory frameworks.
NFV is emerging as a key potential effect multiplier but, without a viable service orchestration framework, is becoming a collection of large proprietary frameworks from legacy telecom equipment manufacturers, or an endless suite of isolated virtual network functions with little coherent cohabitation model for harmonious service delivery.
All is not lost, though. With an ecosystem that is moving faster than any initiative in the telecom standards world, it feels like vendors and operators can find the right recipe within a few iterations, unleashing a flexible, scalable, elastic environment for the cost-effective creation and management of tomorrow's services.
There is certainly a race between the likes of AT&T, Telefonica and Deutsche Telekom on the service provider side, and Affirmed Networks, ALU, Ericsson, Huawei and HP as some of the leading vendors, to deliver on the promises of SDN and NFV in telco networks.
This report provides a review of the main trends pushing network operators towards a simplified, cost-effective network architecture, and of the vendors' strategies and roadmaps to address this disruption to their traditional architecture and revenue model.
Thursday, September 24, 2015
SDN-NFV in wireless 2015/2016 is released
As previously announced, I have been working on my new report, "SDN-NFV in wireless 2015/2016", and I am happy to announce its release.
The report features primary and secondary research on the state of SDN and NFV standards and open source, together with an analysis of the most advanced network operators and solutions vendors in the space.
You can download the table of contents here.
Released September 2015
130 pages
- Operators strategy and deployments review: AT&T, China Unicom, Deutsche Telekom, EE, Telecom Italy, Telefonica, ...
- Vendors strategy and roadmap review: Affirmed networks, ALU, Cisco, Ericsson, F5, HP, Huawei, Intel, Juniper, Oracle, Red Hat...
- Drivers for SDN and NFV in telecom networks
- Public, private, hybrid, specialized clouds
- Review of SDN and NFV standards and open source initiatives
- SDN
- Service chaining
- Apache CloudStack, Microsoft Cloud OS, Red Hat, Citrix CloudPlatform, OpenStack, VMWare vCloud,
- SDN controllers (OpenDaylight, ONOS)
- SDN protocols (OpenFlow, NETCONF, ForCES, YANG...)
- NFV
- ETSI ISG NFV
- OPNFV
- OpenMANO
- NFVRG
- MEF LSO
- Hypervisors: VMWare vs. KVM, vs Containers
- How does it all fit together?
- Core and RAN networks NFV roadmap
Terms and conditions: message me at patrick.lopez@coreanalysis.ca
Labels:
ATT,
ETSI,
KVM,
mobile broadband,
NETCONF,
NFV,
ONOS,
opendaylight,
openflow,
OPNFV,
SDN,
Telefonica,
traffic management,
virtualized,
YANG
Location:
Toronto, ON, Canada
Thursday, September 10, 2015
What we can learn from ETSI ISG NFV PoCs
This post is extracted from my report SDN - NFV in Wireless.
Last year's report had a complete review of all ETSI NFV proofs of concept, their participants, aims and achievements. This year, I propose a short statistical analysis of the 38 PoCs proposed to date. This analysis provides some interesting insights into where the NFV challenges stand today and who the active participants in their resolution are.
- 21 service providers participate in 38 PoCs at ETSI NFV
- 36% of service providers are in EMEA and responsible for 52% of trials, 41% in APAC, responsible for 25% of trials and 23% in North America, responsible for 23% of trials.
Out of 38 PoCs, only 31% have seen active participation from one or several operators; in the rest, operators have taken a back seat and either lent their name to the process (at least one operator must be involved for a PoC to be validated) or provided high-level requirements and feedback. The most active operators have been Deutsche Telekom and NTT, but only on the first PoCs in 2014. After that, operators' participation has been spotty, suggesting that those heavily involved at the beginning of the process have moved on to private PoCs and trials. Since Q1 2015, 50% of PoCs have seen direct operator involvement, ranging from orchestration to NFVI or VIM, with operators who are mostly new to NFV, suggesting a second wave of service providers is getting into the fray with a more hands-on approach.
Figure 36: Operators' activity in PoCs
Out of the 52 operator participations in the 38 PoCs, Telefonica, AT&T, BT, DT, NTT and Vodafone account for 62% of all PoCs, while other operators have only been involved in one PoC or are just starting. Telefonica has been the most active overall, but with all of its involvement in 2014 and no new PoC participation in 2015. AT&T was involved throughout 2014 and has only recently restarted a PoC in 2015. British Telecom has been the most regular since the start of the program, with on average close to one PoC per quarter.
Figure 37: ETSI NFV PoC operators’ participation
On the vendors' front, 87 vendors and academic institutions have participated in the PoCs to date, led by HP and Intel (each found in 8% of PoCs). The second tier of participants includes, in descending order, Brocade, Alcatel-Lucent, Huawei, Red Hat and Cisco, each represented in between 3 and 5% of the PoCs. Overwhelmingly, in 49% of cases, vendors participated in only one PoC.
The most interesting statistic, to my mind, shows that exactly half of the PoCs are using SDN for virtual networking or VIM, and the same proportion (but not necessarily the same PoCs) have deployed a VNF orchestrator in some form.
Labels:
ETSI,
NFV,
SDN,
traffic management,
virtualized
Location:
Toronto, ON, Canada
Friday, September 4, 2015
Video is eating the internet: clouds, codecs and alliances
A couple of news items should have caught your attention this week if you are interested in the video streaming business.
Amazon Web Services confirmed yesterday the acquisition of Elemental. This is the outcome of a trend that I have been highlighting in my SDN / NFV report and workshops over the last year: the creation of specialized clouds. Elemental's products are software-based, and the company was the first in professional video to offer cloud-based encoding on Amazon EC2 with a PaaS offering. Elemental has been building virtual private clouds on commercial clouds for its clients and was the first to coin the term "Software Defined Video". As Elemental joins AWS, Amazon will be one of the first commercial clouds to offer a global, turnkey video encoding, workflow and packaging infrastructure in the cloud. Video processing requires specific profiles in a cloud environment, and it is not surprising that companies with cloud assets look at creating cloud slices or segregated virtual environments to manage this processing-heavy, latency-sensitive service.
The codec war has been on for a long time, and I have previously commented on it. In other news, we have seen Amazon again join Cisco, Google, Intel, Microsoft, Mozilla and Netflix in the Alliance for Open Media. This organization's goal is to counter unreasonable claims made by H.265 / HEVC patent holders, grouped as HEVC Advance, who are trying to create a very vague and very expensive licensing agreement for the use of their patents. The group, composed of Dolby, GE, Mitsubishi Electric and Technicolor, is trying to enforce a 0.5% fee on any revenue associated with the codec's use. The license fee would apply indiscriminately to all companies who encode, decode, transmit or display HEVC content. If H.265 were to be as successful as H.264, it would account in the future for over 90% of all video streaming traffic, and that 0.5% tax would presumably be levied on any content provider, aggregator, app, web site... HEVC Advance could become the most profitable patent pool ever, with 0.5% of the revenues of Google's, Facebook's or Apple's video businesses. The group does not stop there and proposes a license fee on devices as well, from smartphones to tablets to TVs, or anything that has a screen and a video player able to play H.265 videos... Back to the Alliance for Open Media, which has decided to counter-attack and vows to create a royalty-free next generation video codec. Between Cisco's Thor, Google's VPx and Mozilla's Daala, this is a credible effort to counter HEVC Advance.
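To put that 0.5% in perspective, a back-of-the-envelope illustration with entirely invented revenue figures:

```python
# Hypothetical illustration of the proposed HEVC Advance content royalty.
# Revenue figures are invented purely for the sake of the arithmetic.
royalty_rate = 0.005  # 0.5% of revenue attributable to HEVC content

hypothetical_video_revenues = {
    "large OTT video service": 4_000_000_000,   # $4B/year, invented
    "mid-size broadcaster app": 250_000_000,    # $250M/year, invented
}

for name, revenue in hypothetical_video_revenues.items():
    print(f"{name}: ${revenue * royalty_rate:,.0f}/year in codec royalties")
```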
The Streaming Video Alliance, created in 2014 to provide a forum for TV, cable, content owners and service providers to improve the internet video streaming experience, welcomes Sky and Time Warner Cable to a group already composed of Alcatel-Lucent, Beamr, CableLabs, Cedexis, Charter Communications, Cisco, Comcast, Conviva, EPIX, Ericsson, FOX Networks, Intel, Irdeto, Korea Telecom, Level 3 Communications, Liberty Global, Limelight Networks, MLB Advanced Media, NeuLion, Nominum, PeerApp, Qwilt, Telecom Italia, Telstra, Ustream, Verizon, Wowza Media Systems and Yahoo!. What is remarkable here is the variety of the group, where MSOs, vendors and service providers are looking at transparent caching architectures and video metadata handling outside of the standards, to counter specialized video delivery networks such as Apple's, Google's and Netflix's.
All in all, video is poised to eat the internet, and IBC, starting next week, will no doubt bring a lot more exciting announcements. The common denominator here is that all these companies have identified that encoding, managing, packaging and delivering video well will be a crucial differentiating factor in tomorrow's networks. Domination of only one element of the value chain (codec, network, device...) will guarantee great power in the ecosystem. Will vertically integrated ecosystems such as Google's and Apple's yield as operators, distributors and content owners organize themselves? This and much more in my report on video monetization in 2015.
Labels:
4K,
Amazon,
broadcast,
cloud,
codecs,
HEVC,
mobile video,
Video delivery,
video optimization
Location:
Toronto, ON, Canada
Wednesday, August 12, 2015
The orchestrator conundrum in SDN and NFV
We have seen over the last year a flurry of activity around orchestration in SDN and NFV. As I have written about here and here, orchestration is a key element and will likely make or break SDN and NFV success in wireless.
A common mistake associated with orchestration is the assumption that it covers the same elements or objectives in SDN and NFV. This is a real issue, because while SDN orchestration is about resource and infrastructure management, NFV's should be about service management. There is admittedly a level of overlap, particularly if you define services as both network and customer sets of rules and policies.
To simplify, we'll say here that SDN orchestration is about resource allocation and about auditing, assuring and managing virtual, physical and mixed infrastructure, while NFV's is about creating rules for traffic and service instantiation based on subscriber, media, origin, destination, etc.
The two orchestration models are complementary (it is harder to create and manage services if you do not have visibility into available resources; conversely, it can be more efficient to manage resources knowing what services run on them) but not necessarily well integrated. A bevy of standards and open source organizations (ETSI ISG NFV, OPNFV, MEF, OpenStack, OpenDaylight...) are busy trying to map one onto the other, which is no easy task. SDN orchestration is well defined in terms of its purview, less so in terms of implementation, but a few models are available to experiment on. NFV is in its infancy, still defining what the elements of service orchestration are, their proposed interfaces with the infrastructure and the VNFs, and, generally speaking, how to create a model for service instantiation and management.
For those who have followed this blog and my clients who have attended my SDN and NFV in wireless workshop, it is well known that the management and orchestration (MANO) area is under intense scrutiny from many operators and vendors alike.
Increasingly, infrastructure vendors who are seeing the commoditization of their cash cow understand that the brain of tomorrow's network will be in MANO.
Think of MANO as the network's app store. It controls which apps (VNFs) are instantiated and with what level of resources, and stitches VNFs together (service chaining) to create services.
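As a thought experiment, a toy orchestrator makes the app-store analogy concrete. The classes below are invented for illustration and bear no relation to the (still undefined) ETSI MANO interfaces:

```python
from dataclasses import dataclass

@dataclass
class VNF:
    name: str
    vcpus: int

class Orchestrator:
    """Toy MANO: checks resources, instantiates VNFs, records the chain."""
    def __init__(self, capacity_vcpus: int):
        self.capacity = capacity_vcpus
        self.chains: dict[str, list[VNF]] = {}

    def instantiate_chain(self, service: str, vnfs: list[VNF]) -> None:
        """Reserve resources, then stitch VNFs into an ordered service chain."""
        needed = sum(v.vcpus for v in vnfs)
        if needed > self.capacity:
            raise RuntimeError(f"not enough resources for {service}")
        self.capacity -= needed
        self.chains[service] = vnfs
        print(f"{service}: " + " -> ".join(v.name for v in vnfs))

mano = Orchestrator(capacity_vcpus=16)
mano.instantiate_chain("video", [VNF("DPI", 2), VNF("web-gw", 4), VNF("optimizer", 6)])
```

The interesting part is everything this toy leaves out: how a VNF declares its needs and capabilities to the orchestrator, which, as noted below, the standard does not yet specify.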
The problem is that MANO is not yet defined by ETSI, so anyone who wants to orchestrate VNFs today is either building their own or stuck with the handful of vendors who provide MANO-like engines. Since MANO is ill-defined, the integration requires a certain amount of proprietary effort. Vendors will say that it is all based on open interfaces, but the reality is that there is no mechanism in the standard today for a VNF to declare its capabilities, its needs and its intent, so a MANO integration requires some level of abstraction or deep fine-tuning.
As a result, MANO can become very sticky if deployed in an operator network. The VNFs can come and go and vendors can be swapped at will, but the MANO has the potential to be a great anchor point.
It is not a surprise therefore to see vendors investing heavily in this field or acquiring the capabilities:
- Cisco acquired TailF in 2014
- Ciena acquired Cyan this year
- Cenx received $12.5m in funding this year...
At the same time, Telefonica has launched an open source collaborative effort called openMANO to stimulate the industry and reduce risks of verticalization of infrastructure / MANO vendors.
For more information on how SDN and NFV are implemented in wireless networks, vendors and operators strategies, look here.
Labels:
MANO,
NFV,
opendaylight,
openstack,
OPNFV,
SDN,
service chaining,
service enablement,
Telefonica,
virtualized
Location:
Toronto, ON, Canada
Tuesday, July 28, 2015
Citrix selling Bytemobile
In a press release dated July 28, Citrix Systems announced that it will collaborate with Elliott Management, an activist investment firm that has amassed 7.1% of the company's common stock and has been advocating for strategic changes in Citrix's product portfolio and operations.
Elliott had announced its plans to be actively involved in Citrix's strategy in a letter to the board on June 11. The letter laid out a plan for Citrix stock growth and investor value creation, including executive and operational changes, as well as the spin-off or sale of business units, including ByteMobile, acquired for $435m in 2012.
Citrix has announced that it has retained financial advisors for the sale of ByteMobile.
Concurrent with the announcement that Citrix will collaborate with Elliott and give them a board seat, Citrix's CEO has announced his retirement, effective as soon as a replacement is found.
Thursday, July 9, 2015
Announcing SDN / NFV in wireless 2015
On the heels of my presentation at the NFV World Congress in San Diego this spring, my presentation and panels at LTE World Summit on network virtualization, and my anticipated participation at the SDN & OpenFlow World Congress in the fall, I am happy to announce production of "SDN / NFV in wireless networks 2015".
This report, to be released in September, will feature my review of the progress of SDN and NFV as technologies transitioning from PoC to commercial trials and limited deployments in wireless networks.
The report provides a step by step strategy for introducing SDN and NFV in your product and services development.
- Drivers for SDN and NFV in telecom networks
- Public, private, hybrid, specialized clouds
- Review of SDN and NFV standards and open source initiatives
- SDN
- Service chaining
- Apache CloudStack, Microsoft Cloud OS, Red Hat, Citrix CloudPlatform, OpenStack, VMWare vCloud,
- SDN controllers (OpenDaylight, ONOS)
- SDN protocols (OpenFlow, NETCONF, ForCES, YANG...)
- NFV
- ETSI ISG NFV
- OPNFV
- OpenMANO
- NFVRG
- MEF LSO
- Hypervisors: VMWare vs. KVM, vs Containers
- How does it all fit together?
- Core and RAN networks NFV roadmap
- Operators strategy and deployments review: AT&T, China Unicom, Deutsche Telekom, EE, Telecom Italy, Telefonica, Verizon...
- Vendors strategy and roadmap review: Affirmed networks, ALU, Cisco, Ericsson, F5, HP, Huawei, Intel, Juniper, Oracle, Red Hat...
Can't wait for the report? Want more in-depth and personalized training? A 5-hour workshop and strategy session is available now to answer your specific questions and help you chart your product and services roadmap, while understanding your competitors' strategy and progress.
Labels:
cloud,
ETSI,
MANO,
MEC,
mobile broadband,
NFV,
opendaylight,
openflow,
openstack,
OPNFV,
SDN,
traffic management
Location:
Toronto, ON, Canada
Wednesday, June 24, 2015
Building a mobile video delivery network? part III
Content providers and aggregators obviously have an interest in (and in some cases a legal obligation to) controlling the quality of the content they sell to a consumer. Without owning networks outright to deliver the content, they rent capacity, under specific service level agreements, to deliver this content with a managed Quality of Experience. When the content is delivered over the "free" internet or a mobile network, there is no QoE guarantee. As a result, content providers and aggregators tend to "push the envelope" and grab as much network resource as available to deliver a video stream, in an effort to equate speed and capacity with consumer QoE. This might work on fixed networks, but in mobile, where capacity is limited and variable, it causes congestion.
Obviously, delegating the selection of content quality to the device would seem a smart move. Since the content is played on the device, this is where there is the clearest understanding of instantaneous network capacity or congestion. Unfortunately, certain handset vendors, particularly those coming from the consumer electronics world, do not have enough experience in wireless IP for efficient video delivery. Some devices, for instance, will go and grab the highest capacity available on the network, irrespective of the encoding of the video requested. So, for instance, if the capacity at connection is 2Mbps and the video is encoded at 1Mbps, it will be downloaded at twice its rate. That is not a problem when network capacity is available, but as congestion creeps in, this behaviour snowballs and compounds congestion in embattled networks.
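For illustration, here is a minimal sketch of the client-side pacing a well-behaved player could implement, under assumed link and encoding rates (my figures). A naive player of the kind described above would simply skip the final sleep and fetch at full link rate:

```python
import time

ENCODING_KBPS = 1000   # video encoded at 1 Mbps (assumed)
LINK_KBPS = 2000       # radio link grants 2 Mbps at connection time (assumed)
CHUNK_KB = 250         # fetch granularity: 2 seconds of video at 1 Mbps

def fetch_chunk() -> None:
    """Stand-in for an HTTP range request; sleeps as if limited by the link."""
    time.sleep(CHUNK_KB * 8 / LINK_KBPS)

def paced_download(duration_s: int = 10) -> None:
    """Request only what playback needs: one chunk per playback interval."""
    interval = CHUNK_KB * 8 / ENCODING_KBPS   # seconds of video per chunk
    for _ in range(int(duration_s / interval)):
        start = time.monotonic()
        fetch_chunk()
        # Sleep off the headroom instead of grabbing the next chunk early,
        # leaving the spare capacity to other users in the cell.
        time.sleep(max(0, interval - (time.monotonic() - start)))

paced_download()
```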
As more and more device manufacturers coming from the computing world (as opposed to mobile) enter the market with smartphones and tablets, we see wide variations in the implementations of their native video players.
Consequently, operators are looking at ways to control video traffic, as a means of perhaps being able to monetize it differently in the future. Control can take many different forms and rely on many technologies, ranging from relatively passive to increasingly obtrusive and aggressive.
In any case, the rationale for implementing video control technologies in mobile networks goes beyond the search for the best delivery model. At this point in time, the actors have equal footing and an equal interest in preserving users' QoE. They have elected to try to take control of the value chain independently. This has resulted in a variety of low-level battles, where each side is trying to assert control over the others.
The evidence of these battles is plentiful:
- Google tries to impose VP9 as an alternative to H.265 / HEVC: While the internet giant's rationale of providing a royalty-free codec as the next high-efficiency codec seems innocuous to some, it is a means to control the value chain. If content providers start to use VP9 instead of H.265, Google will have the means to durably influence the roadmap for delivering video content over the internet.
- Orange extracts peering fees from Google / YouTube in Africa: Orange has a dominant position in mobile networks and backhaul in Africa and has been able to force Google to the negotiating table and get them to pay peering fees for delivering YouTube over wireless networks. A world first.
- Network operators implement video optimization technologies: In order to keep control of the OTT videos delivered on their networks, network operators have deployed video optimization engines to reduce the volume of traffic, to alleviate congestion or, more generally, to keep a firmer grip on the type of traffic transiting their networks.
- Encryption as an obfuscation mechanism: content or protocol encryption has traditionally been a means to protect sensitive content from interception, reproduction or manipulation. There is a certain cost and latency involved in encrypting and decrypting the content, so it has remained mostly reserved for premium video. Lately, content providers have been experimenting with the delivery of encrypted video as a means to obfuscate the traffic and stop network operators from interfering with it.
- The net neutrality debate, when pushed by large content providers and aggregators, is oftentimes a proxy for a commercial battle. The economics of the internet have evolved from browsing to streaming, and video has disrupted the models significantly. The service level agreements put in place by the distribution chains (CDNs, peering points...) are somewhat inadequate for video delivery.
We could go on and on listing all the ways that content providers and network operators are probing each other’s capacity to remain in control of the user’s video experience. These initiatives are isolated, but they are signs of large market forces trying to establish dominance over each other. So far, these manoeuvres have degraded the user experience. The market will undoubtedly settle into a more collaborative mode, as the current behaviour could lead to mutually assured destruction. The reality is simple: there is a huge appetite for online video, and an increasing part of it takes place on mobile devices, on cellular networks. There is money to be made if there is collaboration; the players are too large for any of them to establish durable dominance without vertical integration.
Tuesday, June 23, 2015
Building a mobile video delivery network? part II
Frequently, in my interactions with vendors and content providers alike, the same questions come up. Why aren’t content providers better placed than network operators to manage the delivery of the content they own? Why are operators implementing transcoding technologies in their networks, when content providers and CDNs have similar capabilities and a better understanding of the content they deliver? Why should operators be involved in controlling the quality of a content or service that does not originate on their network?
In every case, the answer is the same: it is about control. If you look at the value chain of delivering content over wireless networks, it is clear that technology abounds for controlling the content, its quality, its delivery and its associated services at the device, in the network, in the CDN and at the content provider. Why are all the actors in the delivery chain seemingly hell-bent on overstepping each other’s boundaries and wresting away each other’s capacity to influence content delivery?
To answer this question, you need to understand how content used to be sold in mobile networks. Until fairly recently, the only “successful” use case of content being sold on mobile networks was ringtones. To personalize your phone, you went to your operator’s portal and bought a ringtone to download to your device. The ringtones were sold by the operator and charged on your wireless bill; they were provided by an aggregator, usually white-labelled, who would receive a percentage of the sale and then kick back another percentage of their share to the content provider who created the ringtone.
That model was cherished by network operators. They had full control of the experience: selecting the content aggregator themselves, in some cases the content providers, negotiating the rates from a position of power, and selling to the customer under their brand, in their branded environment, on their bills.
This is a long way from today’s OTT (Over-The-Top) services, where content and services are often free for the user, monetized through advertising or other transparent schemes, with content selected by the user and purchased or sourced directly on the content provider’s site, with no involvement from the network operator other than the delivery itself. These OTT services threaten the network operator’s business model. Voice and messaging, the traditional revenue makers for operators, are decreasing year over year in revenue while increasing in volume, due to fierce competition from OTT providers. These services remain hugely profitable for networks, and technology has allowed great scalability with small cost increments, promising healthy margins for a long while. Roaming prices are still in many cases extortionate; while some legislators are trying to get users fairer prices, it will be a long time before they disappear altogether.
Data, in comparison, is still uncharted territory. Until recently, the service was not really monetized; it was used as a loss leader to entice consumers to sign longer-term contracts. This is why so many operators initially launched unlimited data services. 3G, and more recently LTE, have seen the latest examples of operators subsidizing data services for customer acquisition.
The growth of video in mobile networks is upsetting this balance, though. The unpredictability and natural propensity of video to expand and monopolize network resources make it a more visible and urgent threat as an OTT service. Data networks have greatly evolved with LTE, with better capacity, speed and latency than 3G. But the price paid to increase network capacity is still in the order of billions of dollars, once spectrum, licenses, real estate and deployment are taken into account. Unfortunately, the growth of video in terms of users, usage and quality outstrips the progress made in transport technology. As a result, when network operators look at a video compound annual growth rate exceeding 70%, they realize that serving the demand will continue to be a costly proposition if they are not able to control or monetize it. This is the crux of the issue. Video, as part of data, is not charged in a very sophisticated manner today: it is sold either as unlimited or as a bucket of usage and/or speed. The price of data delivery today will not cover the cost of upgrading network capacity in the future if network operators cannot better control video traffic.
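As a back-of-the-envelope illustration of that mismatch, the sketch below compounds the 70% figure quoted above against a capacity budget growing at an assumed 20% a year (my assumption, purely for comparison):

```python
# Compound the 70% video CAGR (from the text) against an assumed 20%
# yearly capacity growth to show how quickly the two curves diverge.

VIDEO_CAGR = 0.70       # from the text
CAPACITY_GROWTH = 0.20  # assumption, for illustration only

traffic, capacity = 1.0, 1.0
for year in range(1, 6):
    traffic *= 1 + VIDEO_CAGR
    capacity *= 1 + CAPACITY_GROWTH
    print(f"year {year}: traffic x{traffic:.1f}, capacity x{capacity:.1f}")
# After 5 years, traffic has grown ~14x while capacity has grown ~2.5x.
```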
Additionally, content providers and device vendors take diametrically opposed attitudes in this equation. Device manufacturers, mobile network operators and content providers all want to deliver the best user experience for the consumer, yet the lack of cooperation between the protagonists in the value chain paradoxically results in an overall reduced user experience.
Wednesday, June 10, 2015
Google's MVNO - Project Fi is disappointing
A first look at Google's MVNO, to launch in the US on the Sprint and T-Mobile networks, reveals itself a little disappointing (or a relief if you are a network operator). I had chronicled the announcement of the launch from Mobile World Congress and expected much more disruption in services and pricing than what is announced here.
The MVNO, dubbed project Fi, is supposed to launch shortly and you have to request an invitation to get access to it (so mysterious and exciting...).
At first glance, there is little innovation in the service. The Google virtual network will span two LTE networks from different providers (but so does Virgin's in France, for instance) and will also connect "seamlessly" to the "best" wifi hotspot. It will be interesting to read the first feedback on how effectively the device selects the best signal from these three options and how dynamically that selection occurs. Handover mid-call or mid-data-session is going to be an interesting use case; Google assures you that the transition will be "seamless".
On the plus side, Google has really taken a page from Iliad's disruptive Free service launched in France (Iliad was at one time rumored to be acquiring T-Mobile US). See here the impact their pricing strategy has had on the French telecommunications market.
- Fi Basic service comes with unlimited US talk and text, unlimited international text and wifi tethering for $20 per month.
- The subscriber sets a monthly data budget, whereby they select a monthly amount and prepay $10 per GB. At the end of the month, unused data is credited back at 1c / MB towards the following month. The user can change their budget on a monthly basis. Only cellular data counts towards usage, not wifi. That is simple, easy to understand and, after a little experimentation, will feel very natural (see the sketch after this list).
- No contract, no commitment (except that you have to buy a $600+ Nexus phone).
- You can send and receive all cellular texts and calls using Google Hangouts on any device.
- Data roaming is the same price as domestic but... see drawbacks below.
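As a quick illustration of the budget mechanics just described, here is a minimal sketch of the billing arithmetic; it assumes 1 GB = 1,000 MB and ignores overage handling, which the announcement does not detail:

```python
# Sketch of the announced Project Fi pricing: $20 base, $10/GB prepaid
# data budget, unused data credited back at 1c/MB. Overage rules are
# not described in the announcement, so they are ignored here.

BASE_FEE = 20.0       # Fi Basic: unlimited US talk and text
RATE_PER_GB = 10.0    # prepaid data budget
CREDIT_PER_MB = 0.01  # unused data credit
MB_PER_GB = 1000      # assumption for the credit computation

def monthly_bill(budget_gb: float, used_gb: float) -> float:
    """Net bill for the month after the unused-data credit."""
    unused_mb = max(budget_gb - used_gb, 0.0) * MB_PER_GB
    return BASE_FEE + budget_gb * RATE_PER_GB - unused_mb * CREDIT_PER_MB

# A 3 GB budget with 1.4 GB actually used:
print(f"${monthly_bill(3.0, 1.4):.2f}")  # $20 + $30 - $16 credit = $34.00
```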
Here are, in my mind, the biggest drawbacks with the service as it is described.
- The first big disappointment is that the service will initially run only on Google's Nexus 6. I have spoken at length about the dangers and opportunities of a fully vertical delivery chain in wireless networks, and Google at first seems to pile up the drawbacks (lack of device choice) with few of the supposed benefits (where is the streamlined user experience?).
- "Project Fi connects you automatically to free, open wifi networks that do not require any action to get connected". I don't know you, but I don't think I have ever come across one of these mysterious hotspots in the US. Even Starbucks or MC Donald free hot spots require to accept terms and conditions and the speed is usually lower than LTE.
- Roaming data speed limited to 256 kbps? Really? Come on, we are in 2015. Even off LTE, you can get multi-Mbps speeds on 3G / HSPA. Capping at that speed means you will not be streaming video, tethering or using data-hungry apps (Facebook, Netflix, Periscope, Vine, Instagram...). What's the point? At this stage, better to say roaming on wifi only (!?).
In conclusion, it is an interesting "project" that is sure to make some noise and have an impact on the already active price war between US operators, but on the face of it, there is too little innovation and too much hassle for it to become a mass market proposition. Operators still have time to figure out new monetization strategies for their services, but more than ever, they must choose between becoming wholesalers or value-added providers.
Monday, June 8, 2015
Data traffic optimization feature set
Data traffic optimization in wireless networks has reached a mature stage as a technology. The innovations that marked the years 2008 – 2012 are now slowing down, and most core vendors exhibit a fairly homogeneous feature set.
The difference comes in the implementation of these features, which can yield vastly different results depending on whether vendors use open-source or purpose-built caching and transcoding engines, and whether congestion detection is based on observed or deduced parameters.
Vendors nowadays tend to differentiate on QoE measurement / management and on monetization strategies, including content injection, recommendation and advertising.
Here is a list of commonly implemented optimization techniques in wireless networks (a sketch of the temporal QoE measurements follows the list).
- TCP optimization
  - Buffer bloat management
  - Round trip time management
- Web optimization
  - GZIP
  - JPEG / PNG… transcoding
  - Server-side JavaScript
  - White space / comments… removal
- Lossless optimization
  - Throttling / pacing
  - Caching
  - Adaptive bit rate manipulation
    - Manifest mediation
    - Rate capping
- Lossy optimization
  - Frame rate reduction
  - Transcoding
    - Online
    - Offline
  - Transrating
- Contextual optimization
  - Dynamic bit rate adaptation
  - Device-targeted optimization
  - Content-targeted optimization
  - Rule-based optimization
  - Policy-driven optimization
  - Surgical optimization / congestion avoidance
- Congestion detection
  - TCP parameters based
  - RAN explicit indication
  - Probe based
  - Heuristics combination based
- Encrypted traffic management
  - Encrypted traffic analytics
  - Throttling / pacing
  - Transparent proxy
  - Explicit proxy
- QoE measurement
  - Web
    - Page size
    - Page load time (total)
    - Page load time (first rendering)
  - Video
    - Temporal measurements
      - Time to start
      - Duration loading
      - Duration and number of buffering interruptions
      - Changes in adaptive bit rates
      - Quantization
      - Delivery MOS
    - Spatial measurements
      - Packet loss
      - Blockiness
      - Blurriness
      - PSNR / SSIM
      - Presentation MOS
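As promised above, here is a minimal sketch of how the temporal video QoE measurements in the list might be derived from a player event log. The event names and structure are my own assumptions, not any vendor's instrumentation:

```python
# Derive time to start and buffering statistics from an assumed player
# event log; spatial measurements (PSNR, SSIM...) need the decoded
# frames and are out of scope for this sketch.

from dataclasses import dataclass

@dataclass
class Event:
    t: float   # seconds since the user pressed play
    kind: str  # "play_requested", "first_frame", "stall_start", "stall_end"

def temporal_qoe(events: list[Event]) -> dict:
    start = next(e.t for e in events if e.kind == "play_requested")
    first_frame = next(e.t for e in events if e.kind == "first_frame")
    stalls = [e for e in events if e.kind.startswith("stall")]
    # Pair each stall_start with its stall_end to total buffering time.
    stall_s = sum(end.t - beg.t for beg, end in zip(stalls[::2], stalls[1::2]))
    return {
        "time_to_start_s": first_frame - start,
        "buffering_interruptions": len(stalls) // 2,
        "buffering_duration_s": round(stall_s, 1),
    }

log = [Event(0.0, "play_requested"), Event(1.8, "first_frame"),
       Event(40.2, "stall_start"), Event(42.9, "stall_end")]
print(temporal_qoe(log))  # 1.8s to start, one 2.7s interruption
```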
An explanation of each technology and its feature set can be obtained as part of the mobile video monetization report series, individually as a feature report, or in a workshop.
Wednesday, May 13, 2015
Mobile video monetization: the need for a mediation layer
Extracted from my latest report, mobile video monetization 2015.
[...] What is clear from my perspective is that the stabilization of the value chain for monetizing video content in mobile networks is unlikely to happen quickly without an interconnect / mediation layer. OTT and content providers are increasingly collaborating when it comes to enabling connections and zero-rating data traffic, but monetization plays involving advertising, sponsoring, price comparison, recommendation and geo-localized segmented offerings are really in their infancy.
Publishers are increasing their inventory and advertisers are targeting mobile screens, but network operators still have no idea how to enable this model in a scalable manner, presumably because many OTT providers whose model is ad-dependent are not yet willing to share that revenue without a well-defined value.
Intuitively, there are many elements that today reside in an operator’s network that would enrich and raise the value of ad models in a mobile environment. Whether performance- or impression-driven, advertising relies on contextualization for engagement. A large part of that context could and should be: whether the user is on wifi or on the cellular network; whether they are at home, at work or in transit; whether they are a prepaid or postpaid subscriber; how much data or messaging is left in their monthly allotment; whether the cell they are in is congested; whether they are experiencing impairments because they are far from the antenna or because they are being throttled near the end of their quota; whether they are roaming or on their home network… The list goes on and on in terms of data points that can enrich, or prevent, a successful engagement in a mobile environment.
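To make this concrete, here is a sketch of what such an anonymized context record could look like. Every field name is hypothetical, since no standard API exists, which is precisely the problem:

```python
# Hypothetical anonymized subscriber context record carrying the
# network-side data points enumerated above; field names are invented
# for illustration.

from dataclasses import dataclass

@dataclass
class SubscriberContext:
    access: str              # "wifi" or "cellular"
    location_type: str       # "home", "work" or "transit"
    plan: str                # "prepaid" or "postpaid"
    quota_remaining_mb: int  # data left in the monthly allotment
    cell_congested: bool     # is the serving cell congested?
    signal_impaired: bool    # e.g. far from the antenna
    throttled: bool          # near the end of the quota
    roaming: bool            # roaming or on the home network

ctx = SubscriberContext("cellular", "transit", "postpaid",
                        250, True, False, True, False)
# An ad platform could use ctx to decide, say, not to bid a video ad
# into a congested cell for a throttled subscriber.
```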
On the network front, understanding whether a piece of content is an ad or not, whether it is sponsored or not, whether it is performance- or impression-measured, and whether it can be modified, replaced or removed at all from a delivery would be tremendously important to categorize and manage traffic accurately.
Of course, part of the problem is that no advertiser, content provider, aggregator or publisher wants to cut deals individually with the 600+ mobile network operators and the 800+ MVNOs if they do not have to.
Since there is no standard API to exchange these data in a meaningful yet anonymized fashion, the burden is on the parties to create, on a case-by-case basis, the foundation for this interaction from a technical and commercial standpoint. This is not scalable and won’t work fast enough for the market to develop meaningfully.
This is not the first time a similar problem has occurred in mobile networks: whether for data or messaging interconnection, roaming, or inter-network settlements, IPX and interconnect companies have emerged to ease the pain of mediating traffic and settlements between networks.
There is no reason a similar model shouldn’t work for connecting mobile networks, advertisers and OTT providers in a meaningful clearing-house type of partnership. There is no technical limitation here; it just needs a transactional engine separating control plane from data plane, integrated with ad networks and IPX, and a meaningful API to carry subscriber and session information on the control plane both ways (from the network to the content provider and vice versa); a minimal sketch of such an exchange follows. Companies who could make this happen could be traditional IPX providers such as Syniverse, but companies with more advertising DNA, such as Opera, Amazon or Google, would probably be better bets. [...]
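For illustration only, here is a minimal sketch of the two-way control-plane exchange such a clearing house could mediate. Every name is an assumption; the data plane (the content itself) is untouched:

```python
# Sketch: operator and content provider each publish to the clearing
# house, which joins the two sides on an opaque session token and
# settles between them, much as an IPX does for roaming.

def operator_publish(session_token: str) -> dict:
    """Network side: anonymized session context."""
    return {"session": session_token, "access": "cellular",
            "cell_congested": True, "quota_remaining_mb": 250}

def content_declare(session_token: str) -> dict:
    """Content side: what is being delivered on that session."""
    return {"session": session_token, "is_ad": True,
            "sponsored": False, "measurement": "impression"}

def settle(network_msg: dict, content_msg: dict) -> bool:
    """Clearing house: match the two halves of the transaction."""
    return network_msg["session"] == content_msg["session"]

assert settle(operator_publish("tok-123"), content_declare("tok-123"))
```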