Tuesday, April 7, 2015
Short interview by Qosmos on the state of SDN & NFV in wireless networks in 2015.
Friday, May 2, 2014
NFV & SDN part I
You will remember that close to 15 years ago, every telecom platform had to be delivered on hardened, NEBS-certified Sun Solaris SPARC servers with a full-fledged Oracle database to be considered "telecom grade". Little by little, x86 platforms, MySQL databases and Linux have penetrated the ecosystem. It was originally a vendor-driven initiative to reduce third-party costs. The savings were passed on to MNOs who were willing to take the risk of implementing these new platforms. Their adoption has grown from greenfield operators in emerging countries to mature markets, first at the periphery of the network, slowly making their way into business-critical infrastructure.
We are seeing today an analogous push to reduce costs further and banish proprietary hardware with NFV. Driven initially by operators, this initiative sees most network functions first transitioning from hardware to software, then running in virtualized environments on off-the-shelf hardware.
The first companies to embrace NFV have been startups like Affirmed Networks. First met with scepticism, the company seems to have designed from scratch and deployed commercially a virtualized Evolved Packet Core in only 4 years. It certainly helps that the company was funded to the tune of over 100 million dollars by big names such as T-Ventures and Vodafone, providing not only funding but presumably lab capacity at the parent companies to test and fine-tune the new technology.
Since then, vendors have started embracing the trend and are moving more or less enthusiastically towards virtualization of their offerings. Different approaches are emerging, from the simple porting of software to Xen or VMware virtualized environments to more fully realized OpenStack / OpenFlow platforms.
I am actively investigating the field and I have to say some vendors' strategies are head-scratching. In some cases, moving to a virtualized environment is counter-productive. Some telecom products are highly CPU-intensive or specialized and require dedicated resources to attain high performance and scalability in a cost-effective package; deep packet inspection and video processing are good examples. Even the vendors who have virtualized their appliances will admit, when pushed, that virtualization comes at a performance cost given the state of the technology today.
I have been reading the specs (OpenFlow, OpenStack) and I have to admit they seem far from the level of detail we usually see in telco specs. There is a lot of abstraction dedicated to redefining switching, but not much in terms of call flows, datagrams, semantics, service definition, etc.
How the hell does one go about launching a service in a multivendor environment? Well, one doesn't. There is a reason why most NFV initiatives are still at the plumbing level, investigating SDN, SDDC, etc., or taking a single-vendor / single-service approach. I haven't been convinced yet by anyone's implementation of multivendor management, let alone "service orchestration". We are witnessing today islands of service virtualization in hybrid environments; we are still far from network function virtualization per se.
The challenges are multiple:
- Which is better: a dedicated platform with a low footprint and power requirement that might be expensive and centralized, or thousands of virtual instances occupying hundreds of servers that might be cheap (COTS) individually but collectively not very cost- or power-efficient? (See the back-of-envelope sketch after this list.)
- Will network operators trade capex for opex when they need to manage thousands of applications running virtually on IT platforms? How will their personnel, trained to troubleshoot problems by following the traffic and signalling path, adapt to this fluid, non-descript environment?
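For illustration only, here is the kind of back-of-envelope comparison operators will have to run. Every number below is a made-up assumption, not a measurement; the point is the shape of the trade-off, not the figures:

```python
# Purely illustrative arithmetic for the appliance-vs-COTS question;
# all throughput and power figures are hypothetical assumptions.

appliance = {"throughput_gbps": 40, "watts": 800, "units": 1}
cots      = {"throughput_gbps": 2,  "watts": 350, "units": 20}

for name, node in (("dedicated appliance", appliance), ("COTS farm", cots)):
    total_gbps = node["throughput_gbps"] * node["units"]
    total_w = node["watts"] * node["units"]
    print(f"{name}: {total_gbps} Gbps at {total_w / total_gbps:.0f} W/Gbps")
# dedicated appliance: 40 Gbps at 20 W/Gbps
# COTS farm: 40 Gbps at 175 W/Gbps
```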
We are still early in this game, but many vendors are starting to purposefully position themselves in this space to capture the next wave of revenue.
Stay tuned; more to come later this year with a report on the technology, market trends and vendor capabilities in this space.
Monday, January 20, 2014
All packets are not created equal: why DPI and policy vendors look at video encoding
As we are still contemplating the impact of last week's US ruling on net neutrality, I thought I would attempt today to settle a question I often get in my workshops. Why is DPI insufficient when it comes to video policy enforcement?
Deep packet inspection platforms have evolved from static rules-based filtering engines into sophisticated enforcement points allowing packet and protocol classification, prioritization and shaping. Ubiquitous in enterprise and telco networks, they are the jack-of-all-trades of traffic management, enabling use cases as diverse as policy enforcement, adult content filtering, lawful interception, QoS management, and peer-to-peer throttling or interdiction.
DPIs rely first on a robust classification engine. It snoops through data traffic and classifies each packet based on port, protocol, interface, origin, destination, etc. The more sophisticated engines go beyond layer 3 and are able to recognize classes of traffic using headers. This classification engine is sufficient for most traffic types, from web browsing to email, from VoIP to video conferencing or peer-to-peer sharing.
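For the uninitiated, a toy illustration of what a first-generation, rules-based classifier boils down to. The ports and class names here are my own illustrative assumptions, not any vendor's rule set:

```python
# Minimal illustrative sketch of rules-based traffic classification.
# Real DPI engines go much further (headers, heuristics, signatures).

RULES = [
    # (protocol, destination port) -> traffic class
    ("tcp", 80,   "web"),
    ("tcp", 443,  "web-secure"),
    ("tcp", 25,   "email"),
    ("udp", 5060, "voip-signalling"),
    ("tcp", 554,  "rtsp-video"),
]

def classify(protocol: str, dst_port: int) -> str:
    """Return the first matching traffic class, else 'unknown'."""
    for proto, port, traffic_class in RULES:
        if proto == protocol and port == dst_port:
            return traffic_class
    return "unknown"

print(classify("tcp", 554))   # -> rtsp-video
print(classify("tcp", 8080))  # -> unknown: needs deeper inspection
```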
The premise here is that if you can recognize, classify and tag traffic accurately, then you can apply rules governing its delivery, ranging from interdiction to authorization, with many variants of shaping in between.
DPI falls short in many cases when it comes to video streaming. Until 2008 or so, most video streaming relied on specialized protocols such as RTSP. Classification was easy: videos were all encapsulated in a specific protocol, allowing rules to be instantiated and enforced in a straightforward manner. The emergence and predominance of HTTP-based streaming video (progressive download, adaptive streaming and variants) has complicated the task for DPIs. The transport protocol remains the same as general web traffic, but the behaviour is quite different. As we have seen many times in this blog, video traffic must be measured differently from generic data traffic if policy enforcement is to be implemented. All packets are not created equal.
- The first challenge is to recognize that a packet is video. DPIs generally infer the nature of HTTP traffic from its origin/destination. For instance, if the traffic's origin is YouTube, they assume it is video. This is insufficient: not all YouTube traffic is video streaming (browsing between pages, reading or posting comments, uploading a video, liking or disliking...). Applying video rules to browsing traffic, or vice versa, can have adverse consequences on the user experience.
- The second challenge is policy enforcement. The main tool in the DPI arsenal for traffic shaping is setting the delivery bit rate for a specific class of traffic. As we have seen, videos come in many definitions (4K, HD, SD, QCIF...), many containers and many formats, resulting in a variety of encoding bit rates. If you want to shape video traffic, it is crucial to know these elements and the encoding bit rate, because if traffic is throttled below the encoding rate, the video stalls, buffers or times out. A one-size-fits-all policy for video is not reasonable (unless it is to forbid usage altogether). To extract the video-specific attributes of a session, you need to decode it, which requires in-line transcoding capabilities even if you do not intend to modify the video. (A sketch of both points follows this list.)
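To make both challenges concrete, here is a minimal sketch, my own illustration rather than any vendor's implementation: confirm video by its MIME type rather than its origin domain, then derive a shaping rate that stays above the encoding rate. The headroom factor is a hypothetical parameter:

```python
# Illustrative only: recognize video from the HTTP Content-Type header
# instead of inferring it from the origin domain, then shape above the
# encoding bit rate so the player never stalls.

VIDEO_MIME_PREFIXES = ("video/", "application/x-mpegURL")

def is_video(content_type: str) -> bool:
    """A domain like youtube.com is not proof of video; the MIME type is."""
    return content_type.startswith(VIDEO_MIME_PREFIXES)

def shaping_rate_kbps(encoding_rate_kbps: int, headroom: float = 1.2) -> int:
    """Throttling below the encoding rate stalls the player, so shape
    to the encoding rate plus headroom for protocol overhead."""
    return int(encoding_rate_kbps * headroom)

assert is_video("video/mp4")
assert not is_video("text/html")  # a YouTube comment page, for instance
print(shaping_rate_kbps(2500))    # a ~2500 kbps stream -> shape at 3000
```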
Herein lies the difficulty. To implement intelligent, sophisticated traffic management rules today, you need to be able to handle video. To handle video, you need to recognize it (not infer or assume) and measure it. To recognize and measure it, you need to decode it. This is one of the reasons why Allot bought Ortiva Wireless in 2012, Procera partnered with Skyfire, and ByteMobile more recently upgraded their video inspection to full-fledged DPI. We will see more generic traffic management vendors (PCRF, PCEF, DPI...) partner with and acquire video transcoding companies.
Thursday, September 26, 2013
LTE Asia: transition from technology to value... or die
I am just back from LTE Asia in Singapore, where I chaired the track on Network Optimization. The show was well attended, with over 900 people by Informa's estimate.
Once again, I am a bit surprised and disappointed by the gap between operators and vendors' discourse.
By and large, the operators who came (SK, KDDI, KT, Chunghwa, HKCSL, Telkomsel, Indosat to name but a few) had excellent presentations on their past successes and current challenges, highlighting the need for new revenue models, a new content (particularly video) value chain and better customer engagement.
Vendors of all stripes seem to consistently miss the message and try to push technology when their customers need value. I appreciate that the transition is difficult, and as I reflected with a vendor executive at the show, selling technology feels somewhat safer and easier than selling value.
But, as many operators are finding out on their home turf, their consumers do not care much about technology any more. It is on brand, service, image and value that OTT service providers are winning consumers' mind share. Herein lie the risk and the opportunity. Operators need help to evolve and reinvent the mobile value chain.
The value proposition of vendors must evolve towards solutions such as intelligent roaming, 2-way business models with content providers, service type prioritization (messaging, social, video, entertainment, sports...), bundling and charging...
At the heart of this necessary revolution is something that makes many uneasy. DPI and traffic classification relying on ports and protocols is the basis of today's traffic management, and it is rapidly becoming obsolete. A new generation of traffic management engines is needed. The ability to recognize content and service types at a granular level is key. How can the mobile industry evolve in the OTT world if operators are not able to distinguish user-generated content from Hollywood content? How can operators monetize video if they cannot detect, recognize, prioritize and assure advertising content?
Operators have some key assets, though: last-mile delivery, accurate customer demographics, the billing relationship and location must all be leveraged. YouTube knows whether you are on an iPad or a laptop, but not necessarily whether your cellular interface is 3G, HSPA or LTE, and it certainly cannot tell whether a user's poor connection is the result of network congestion, spectrum interference, distance from the cell tower or throttling because the user exceeded their data allowance. There is value there, if operators are ready to transform themselves and their organizations to harvest and sell value, not access.
Opportunities are many. Vendors who continue to sell SIP, IMS, VoLTE, Diameter and their next-generation hip equivalents (LTE Advanced, 5G, cloud, NFV...) will miss the point. None of these are of interest to the consumer. Even if operators insist on buying or talking about technology, services and value will be the keys to success... unless you are planning to be an M2M operator, but that is a story for another time.
Tuesday, July 31, 2012
Allot continues its spending spree
This time the target is Oversi Networks, a provider of transparent caching solutions for OTT and P2P traffic. Specifically, Oversi has developed a purpose-built video cache, one of the first of its kind.
Many vendors in the space have caches built on open-source, general-purpose web caches, originally to manage offline video optimization scenarios (for those unable to transcode MP4 or FLV/F4V containers in real time). As the long tail of video content unfolds, social media and virality create snowballing effects on some video content, and a generic web cache shows its limitations when it comes to caching video efficiently.
The benefits of a hierarchical, video-specific cache then become clear. Since video nowadays comes in many formats and containers, across many protocols, and since content providers repost the same video with different attributes, titles, URLs, durations, etc., it is quite inefficient to cache video based on metadata recognition alone. Some level of media inspection is necessary to ascertain what the video is and whether it really corresponds to the metadata.
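To illustrate the point, a minimal sketch of what content-based cache keying could look like (my own illustration, not Oversi's design): hash the leading media bytes rather than the URL, so reposts under different titles and URLs still hit the same entry. The sample size is a hypothetical tuning parameter:

```python
import hashlib

# Illustrative sketch: key a video cache on a fingerprint of the media
# payload instead of the URL, so the same video reposted under a
# different URL or title still produces a cache hit.

FINGERPRINT_BYTES = 256 * 1024  # hash only the first 256 KB of media

def cache_key(media_head: bytes) -> str:
    """Derive a stable key from the leading media bytes."""
    return hashlib.sha256(media_head[:FINGERPRINT_BYTES]).hexdigest()

cache: dict[str, bytes] = {}

def lookup_or_store(media_head: bytes, payload: bytes) -> bytes:
    key = cache_key(media_head)
    if key not in cache:  # miss: store once, whatever the URL was
        cache[key] = payload
    return cache[key]
```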
All in all, another smart acquisition by Allot. On paper, it certainly strengthens the company's position, with technologies compatible with and complementary to its legacy portfolio and the recent Ortiva acquisition. It will be interesting to see how Allot's product portfolio evolves over time and how the different product lines start to synergize.
Wednesday, April 11, 2012
Policy driven optimization
The video optimization market is still young, but with over 80 mobile networks deployed globally, I am officially transitioning it from emerging to growth phase in the technology life cycle matrix.
Mobile World Congress brought much news in that segment, from new entrants to network announcements, technology launches and new partnerships. I think one of the most interesting trends is in policy and charging management for video.
Operators understand that charging models based on pure data consumption are doomed to be hard for users to understand and to be either extremely inefficient or expensive. In a world where a new iPad can consume a subscriber's data plan in a matter of hours, while the same subscriber could watch 4 to 8 times the same amount of video on a different device, the one-size-fits-all data plan is a dangerous proposition.
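The back-of-the-envelope math is worth spelling out. With purely illustrative bit rates (assumptions, not measurements), the same plan lasts wildly different amounts of time depending on the device and stream quality:

```python
# Illustrative arithmetic only; the plan size and bit rates are
# assumptions chosen to show the order of magnitude.

PLAN_GB = 2
BITRATES_KBPS = {"tablet HD stream": 2500, "small-screen SD stream": 400}

for device, kbps in BITRATES_KBPS.items():
    hours = (PLAN_GB * 8 * 1024 * 1024) / kbps / 3600
    print(f"{device}: ~{hours:.1f} hours to exhaust a {PLAN_GB} GB plan")
# tablet HD stream: ~1.9 hours
# small-screen SD stream: ~11.7 hours
```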
While the tool set to address the issue is essentially in place, with intelligent GGSNs, EPCs, DPIs, PCRFs and video delivery and optimization engines, this collection of devices has mostly been managing its own portion of traffic in a very disorganized fashion: access control at the radio and transport layers segregated from protocol and application, accounting separated from authorization and charging...
Policy control is the technology designed to unify them and, since this market's inception, it has been doing a good job of coordinating access control, accounting, charging, rating and permissions management for voice and data.
What about video?
The Diameter Gx interface is extensible, providing the semantics to convey traffic observations and decisions between one or several policy decision points and policy enforcement points. The standard allows for complex iterative challenges between end points to ascertain a session's user, and their permissions and balance, as they use cellular services.
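As a thought experiment, here is a toy model of what a video-aware, Gx-style exchange could look like. The field names and the video extension are hypothetical illustrations; real Gx messages are Diameter CCR/CCA carrying standardized AVPs:

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of a Gx-style exchange: an enforcement point (PCEF) reports
# what it observed, a decision point (PCRF) answers with a rule.

@dataclass
class SessionReport:                          # what the PCEF observed
    subscriber_id: str
    traffic_class: str                        # e.g. "video", "web"
    encoding_rate_kbps: Optional[int] = None  # the video-aware extension

@dataclass
class PolicyDecision:                         # what the PCRF returns
    max_rate_kbps: int
    charging_group: str

def decide(report: SessionReport) -> PolicyDecision:
    """A video-aware decision: never shape below the encoding rate."""
    if report.traffic_class == "video" and report.encoding_rate_kbps:
        return PolicyDecision(max_rate_kbps=int(report.encoding_rate_kbps * 1.2),
                              charging_group="video-bundle")
    return PolicyDecision(max_rate_kbps=1_000, charging_group="default-data")

print(decide(SessionReport("imsi-001", "video", encoding_rate_kbps=2500)))
# -> PolicyDecision(max_rate_kbps=3000, charging_group='video-bundle')
```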
Video was not a dominant part of the traffic when the policy frameworks were put in place so, not surprisingly, the first-generation PCRFs and video optimization deployments were completely independent. Rules had to be provisioned and maintained in separate systems, because the PCRF was not video-aware and the video optimization platforms were not policy-aware.
This led to many issues, ranging from poor experience (a DPI instructed to throttle traffic below the encoding rate of a video) and bill shock (ill-informed users blowing past their data allowance) to revenue leakage (poorly designed charging models unable to segregate the different types of HTTP traffic).
Next-generation networks see a much tighter integration between policy decision and policy enforcement for the delivery of video in mobile networks. Many vendors in both segments collaborate and have moved past pure interoperability testing to deployments in commercial networks. Unfortunately, we have not seen many proof points of these integrations yet, mostly because this is an emerging area: operators are still trying to find the right recipe for video charging, standards do not offer guidance for video-specific policies, and vendors have to rely on two-way (proprietary?) implementations.
Lately, we have seen the leaders in policy management and video optimization collaborate much more closely to offer solutions in this space; in some cases as the result of being deployed in the same networks and being "forced" to integrate gracefully, in many cases because the market is entering a new stage of maturation. As you well know, I have been advocating a closer collaboration between DPI, policy management and video optimization for a while (here, here and here for instance). I think these are signs of market maturation that will accelerate concentration in that space. There are more and more rumors of video optimization vendors getting closer to mature policy vendors. It is a logical conclusion for operators to get a better-integrated traffic and charging management ecosystem centered around video going forward. I am looking forward to discussing these topics and more at Policy Control 2012 in Amsterdam, April 24-25.
Labels: Authentication, content based charging, cost containment, data cap, DPI, interoperability, mobile broadband, Monetization, Openet, OTT, PCRF, Sandvine, traffic management, Video delivery
Wednesday, December 21, 2011
Allot to acquire Flash Networks for $110-120M?
This is the latest rumor from Globes. Allot, which raised almost $80M a month ago and was rumored to be an acquisition target for F5, then to be discussing the acquisition of Mobixell or PeerApp last year, has a $500M market cap. Flash Networks has raised over $61M.
The resulting company could be booking about $120M in sales and be profitable.
Allot, in a briefing with Jonathon Gordon, Director of Marketing, two weeks ago noted: "Our policies focus more and more on revenue generation. With over 100 charging plans surveyed in our latest report, we see more and more demand for bundled plans for social networks and video. We can already discriminate traffic that is embedded; for instance, we can see that a user is watching a video within a Facebook browsing session, but we cannot recognize and analyse the video in terms of format, bit rate, etc. Premium video-specific policies raise a lot of interest these days."
No doubt the acquisition of an optimization vendor like Flash Networks could solve that problem, by creating a harmonized policy and charging function that actually manages video, which accounted for over half of 2011 mobile traffic globally.
As discussed here and here, video optimization is becoming an attractive target for telco vendors who want to extend beyond DPI and policy. Since video is such a specialized skill, growth in this area is unlikely to be organic. The browsing gateway / DPI / PCRF / optimization segments are likely to collapse into one another over the next 2 years, as they are atomized markets, with small, technology-driven, under-capitalized companies and medium-to-large mature companies looking to increase market share or grow the top line.
Thursday, September 15, 2011
Openet's Intelligent Video Management Solution
As you well know, I have been advocating closer collaboration between DPI, policy management and video optimization for a while (here and here for instance).
To my mind, most carriers had dealt mainly with transactional data traffic until video came along, and there are some fundamental differences between managing transactional and flow-based traffic. The quality of experience of a video service depends as much on the intrinsic quality of the video as on the way that video is being delivered.
In a mobile network, with a daisy chain of proxies and gateways (GGSN, DPI, browsing gateway, video optimization engine, caching systems...), the user experience of a streamed video is only going to be as good as the lowest common denominator of that delivery chain.
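In other words, the effective rate of the chain is set by its slowest element, as this trivial sketch (with hypothetical hop names and rates) illustrates:

```python
# Illustrative sketch: the effective delivery rate of a chained proxy
# path is bounded by its slowest element.

chain_kbps = {"GGSN": 10_000, "DPI": 8_000,
              "browsing gateway": 3_000, "optimizer": 6_000}

bottleneck = min(chain_kbps, key=chain_kbps.get)
print(f"Effective rate: {chain_kbps[bottleneck]} kbps, set by the {bottleneck}")
# -> Effective rate: 3000 kbps, set by the browsing gateway
```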
Gary Rieschick, Director – Wireless and Broadband Solutions at Openet spoke with me today about the Intelligent Video Management Solution launched this week.
"Essentially, as operators are investing in video optimization solutions, they have been asking how to manage video delivery across separate enforcement points. Some vendors are supporting Gx, other are supporting proprietary extensions or proprietary protocols. Some of these vendors have created quality of experience metrics as well, that are used locally, for static rule based video optimization."Openet has been working with two vendors in the video optimization space to try and harmonize video optimization methods with policy management. For instance, depending on the resulting quality of a video after optimization, the PCRF could decide to zero rate that video if the quality was below a certain threshold.
The main solution features highlighted by Gary are below:
- Detection of premium content: the PCRF can be aware of agreements between the content provider and the operator, and be provisioned with rules to prioritize or provide better quality to certain content properties.
- Content prioritization, based on time of day and congestion detection.
- Synchronization of rules across policy enforcement points, to ensure for instance that the throttling engines at the DPI level and at the video optimization engine level do not clash.
- Next-hop routing, where the PCRF can instruct the DPI to route traffic within the operator network based on what the traffic is (video, mail, P2P...).
- Dynamic policies to supplement and replace the static rules provisioned in the video optimization engine, reacting to network congestion indications, subscriber profile, etc. (A sketch of such a rule follows this list.)
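As an illustration of that last point, here is a sketch of the kind of dynamic rule described above, including the zero-rating example from the briefing. The thresholds and field names are my own hypothetical choices, not Openet's implementation:

```python
# Illustrative sketch of a dynamic video charging rule: zero-rate a
# video whose post-optimization quality fell below a threshold.

QUALITY_FLOOR = 0.7  # hypothetical quality-of-experience score in [0, 1]

def video_charging_rule(premium: bool, congested: bool, qoe_score: float) -> str:
    if qoe_score < QUALITY_FLOOR:
        return "zero-rate"          # degraded too far to bill for
    if premium:
        return "priority-delivery"  # content-provider agreement in place
    if congested:
        return "shape-to-sd"        # downgrade non-premium video
    return "standard-rate"

print(video_charging_rule(premium=False, congested=True, qoe_score=0.85))
# -> shape-to-sd
```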
I think this is a good step by Openet to take some thought leadership in this space. Operators need help to create a carefully orchestrated delivery chain for video.
While Openet's solution might work well with a few vendors, I think that a real industry standardization effort is necessary to provide video-specific extensions to the Gx policy interface.
Delivering and optimizing video in a wireless network results in a destructive user experience whenever the control plane enabling feedback on congestion, original video quality, resulting video quality, and device and network capabilities is not shared across all policy enforcement and decision points.
Thursday, September 1, 2011
Bytemobile T 3000 series & Unison update
Bytemobile released this week a new platform (T3000) and a new product (T3100).
With more than 40 operator deployments, Bytemobile is the leader in the video optimization market. The new platform is launched to allow Bytemobile to address the intelligent traffic management market.
Mikko Disini, in charge of the new T3000 series and Unison platforms, discussed with me the rationale behind the introduction of the new product and how it complements Unison.
The T3000 series has been created in an effort to provide more monetization options for mobile broadband operators. For those familiar with Unison, which is essentially a web and video proxy and optimization gateway, the T3100 expands beyond browsing to proxy and manipulate all traffic, including UDP-based applications, P2P, video chat, RTSP, etc.
While Unison remains a software-based solution on off-the-shelf IBM blade centers, the T3000 series is a purpose-built, IBM hardware-based appliance. The T3100 combines load balancing, DPI, PCEF and traffic rules in one package, and Bytemobile is planning to introduce further products on the T3000 platform in the future.
Mikko commented that the rationale behind the hardware-based approach is to be more channel-friendly: "It is easier to deploy, package, explain; it is an easier sale."
My opinion is that Bytemobile is making a smart move in expanding its product portfolio into new verticals. While there is a large overlap between Unison and the T3100 today, Bytemobile can upsell its installed base with purpose-built solutions. In the past, Unison was a Swiss Army knife with a bit of everything, for a market looking for a quick solution; the growth of traffic is now forcing many vendors to separate applications to achieve more granular scalability.
With the T3000, Bytemobile moves more decidedly into the DPI, load balancing and PCEF space than with Unison. Additionally, moving to a hardware appliance model will further enable it to resist price erosion, reusing the Unison tactic of bundling several applications and features together with different market prices and models.
What remains to be seen is how effective the strategy will be in acquiring new channels beyond IBM, NSN and Tech Mahindra, now that the T3000 is sure to encroach on some bigger players such as F5 and Cisco... or maybe that is the strategy?
Labels: adaptive streaming, browsing, Bytemobile, caching, content based charging, DPI, load balancer, mobile broadband, Monetization, off-deck, on-deck, P2P, PCRF, traffic management, video optimization