
Thursday, August 8, 2024

The journey to automated and autonomous networks

 

The TM Forum has been instrumental in defining the journey towards automation and autonomous telco networks. 

As telco revenues from consumers continue to decline, and as the 5G promise of connectivity products that enterprises, governments and large organizations can discover, program and consume remains elusive, telecom operators are under tremendous pressure to maintain profitability.

The network evolution that started with Software Defined Networking and Network Functions Virtualization, and continues with the more recent cloud-native shift, aims to deliver network programmability for the creation of innovative, on-demand connectivity services. Many of these services require deterministic connectivity parameters in terms of availability, bandwidth and latency, which necessitate an end-to-end cloud-native fabric and the separation of control and data planes. Centralized control of the cloud-native functions makes it possible to abstract resources and allocate them on demand as topology and demand evolve.

A benefit of a cloud-native network is that, as software becomes more open and standardized in a multi-vendor environment, many tasks that were either manual or relied on proprietary interfaces can now be automated at scale. As layers of software expose interfaces and APIs that can be discovered and managed by sophisticated orchestration systems, the network can evolve from manual, to assisted, to automated, to autonomous operation.
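To make that progression tangible, here is a minimal sketch of a single closed-loop step, assuming a hypothetical orchestrator client; the class and method names are illustrative, not a real TM Forum or vendor API.

```python
# Minimal closed-loop sketch: observe a KPI, decide, act through an
# orchestration API. The Orchestrator class and its methods are
# hypothetical placeholders, not a real standard interface.

from dataclasses import dataclass


@dataclass
class Kpi:
    name: str
    value: float
    threshold: float


class Orchestrator:
    """Stand-in for whatever exposes the network's management APIs."""

    def read_kpi(self, name: str) -> Kpi:
        # In practice this would query a monitoring system.
        return Kpi(name=name, value=0.92, threshold=0.85)

    def scale_out(self, function: str) -> None:
        print(f"scaling out {function}")


def closed_loop_step(orch: Orchestrator) -> None:
    kpi = orch.read_kpi("upf_cpu_utilisation")
    if kpi.value > kpi.threshold:
        # Automated remediation replaces what used to be a manual runbook step.
        orch.scale_out("upf")


if __name__ == "__main__":
    closed_loop_step(Orchestrator())
```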


TM Forum defines six levels of network autonomy, from fully manual operation (Level 0) to fully autonomous networks (Level 5):

  • Level 0 - Manual operation and maintenance: The system delivers assisted monitoring capabilities, but all dynamic tasks must be executed manually.
  • Level 1 - Assisted operations and maintenance: The system executes a specific, repetitive subtask based on pre-configuration, which can be recorded online and traced, in order to increase execution efficiency.
  • Level 2 - Partial autonomous network: The system enables closed-loop operations and maintenance for specific units under certain external environments via statically configured rules.
  • Level 3 - Conditional autonomous network: The system senses real-time environmental changes and, in certain network domains, optimizes and adjusts itself to the external environment to enable closed-loop management via dynamically programmable policies.
  • Level 4 - Highly autonomous network: In a more complicated cross-domain environment, the system enables decision-making based on predictive analysis or active closed-loop management of service-driven and customer experience-driven networks via AI modeling and continuous learning.
  • Level 5 - Fully autonomous network: The system has closed-loop automation capabilities across multiple services, multiple domains (including partners' domains) and the entire lifecycle via cognitive self-adaptation.

After describing the framework and conditions for the first three levels, the TM Forum has recently published a white paper describing the Level 4 industry blueprints.
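For illustration, the levels can be thought of as an ordered scale that gates what a system is allowed to do unattended. The sketch below is my own shorthand, assuming a hypothetical mapping of actions to minimum levels; it is not part of the TM Forum specification.

```python
# Illustrative only: the TM Forum levels encoded as an enum, with a helper
# that gates whether a proposed closed-loop action may run without human
# approval. The mapping of actions to minimum levels is an assumption.

from enum import IntEnum


class AutonomyLevel(IntEnum):
    MANUAL = 0
    ASSISTED = 1
    PARTIAL = 2
    CONDITIONAL = 3
    HIGH = 4
    FULL = 5


# Hypothetical policy: minimum level required to let an action run unattended.
MIN_LEVEL_FOR_ACTION = {
    "generate_report": AutonomyLevel.ASSISTED,
    "single_domain_optimisation": AutonomyLevel.CONDITIONAL,
    "cross_domain_remediation": AutonomyLevel.HIGH,
}


def may_run_unattended(action: str, level: AutonomyLevel) -> bool:
    required = MIN_LEVEL_FOR_ACTION.get(action, AutonomyLevel.FULL)
    return level >= required


print(may_run_unattended("cross_domain_remediation", AutonomyLevel.CONDITIONAL))  # False
```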

The stated goals of Level 4 are to enable the creation and roll-out of new services within one week with deterministic SLAs, and the delivery of Network as a Service. Furthermore, this level should allow fewer personnel to manage the network (savings in the thousands of person-years) while reducing energy consumption and improving service availability.

These are certainly very ambitious objectives. The paper goes on to describe "high value scenarios" to guide Level 4 development. This is where we start to see cognitive dissonance creeping in between the stated objectives and the methodology. After all, much of what is described here exists today in cloud and enterprise environments, and I wonder whether the telco industry is once again reinventing the wheel by trying to adapt and modify concepts and technologies that are already successful elsewhere.

First, the creation of deterministic connectivity is not (only) the product of automation. Telco networks, in particular mobile networks, are composed of a daisy chain of network elements that coordinate customer traffic, signaling, data repositories, look-ups, authentication, authorization, accounting and policy management functions. On the radio side, signal quality varies over time as weather, power, demand, interference and devices affect the effective transmission. Furthermore, the load on the base station, the backhaul, the core network and the internet peering points also varies over time and affects overall capacity.

Creating a connectivity product with deterministic speed, latency and capacity to enable Network as a Service therefore requires a systemic approach. In a multi-vendor environment, the RAN, the transport and the core must be virtualized, relying on solid fiber connectivity as much as possible for capacity and speed. Low latency requires multiple computing points, all the way to the edge or on premise. Deterministic performance requires not only virtualization and orchestration of the RAN, but also of the PON fiber, together with end-to-end slicing support and orchestration. This is something I led at Telefonica with an open compute edge computing platform, a virtualized (XGS) PON on an ONF ONOS VOLTHA architecture and an open virtualized RAN. It was not yet automated, as most of these elements were advanced prototypes at that stage, but automation is the "easy" part once you have assembled the elements and operated them manually for long enough. The point is that deterministic network performance is attainable but still a distant objective for most operators, and it is a necessary condition for NaaS, before automation and autonomous networks even come into play.
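To give a feel for what "deterministic" means in a NaaS context, here is a rough sketch of a slice order carrying SLA targets across domains, with a naive admission check; the field names and the headroom logic are assumptions for illustration only, not a standard schema.

```python
# Illustrative sketch of a Network-as-a-Service order with deterministic
# SLA targets spanning RAN, transport, core and edge. Field names and the
# admission check are illustrative assumptions, not a standard model.

from dataclasses import dataclass


@dataclass
class SlaTarget:
    downlink_mbps: float
    uplink_mbps: float
    latency_ms: float
    availability_pct: float


@dataclass
class SliceOrder:
    name: str
    sla: SlaTarget
    domains: tuple  # e.g. ("ran", "transport", "core", "edge")


def admit(order: SliceOrder, spare_capacity_mbps: dict) -> bool:
    """Naive admission control: every domain must have headroom for the
    requested downlink rate, otherwise the deterministic SLA cannot hold."""
    return all(
        spare_capacity_mbps.get(d, 0.0) >= order.sla.downlink_mbps
        for d in order.domains
    )


order = SliceOrder(
    name="factory-amr",
    sla=SlaTarget(downlink_mbps=50, uplink_mbps=25, latency_ms=10, availability_pct=99.99),
    domains=("ran", "transport", "edge"),
)
print(admit(order, {"ran": 120.0, "transport": 800.0, "edge": 40.0}))  # False: edge lacks headroom
```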

Second, the high value scenarios described in the paper are all network-related. Ranging from network troubleshooting to optimization and service assurance, these are all worthy objectives, but they still do not feel "high value" in terms of creating new services. While it is natural that automation first focuses on reducing the cost of network roll-out, operation, maintenance and healing, one would have expected a more ambitious description of the "new services".

All in all, the vision is ambitious, but there is still much work to do in fleshing out the details and linking the promised benefits to concrete services beyond network optimization.

Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speeds and ultra-low latency to enable the Internet of Things, autonomous vehicles, augmented, mixed and virtual reality, and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses the latest virtualization, small-cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network.


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are, and how this potentially multi-billion-dollar technology can change how OTTs, operators, enterprises and machines enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator perspective.

The table of contents and executive summary can be downloaded here.

Thursday, June 26, 2014

LTE World Summit 2014

This year's 10th edition of the conference seems to have found a new level of maturity. While VoLTE, RCS and IMS are still subjects of interest, we seem to be past the hype at last (see last year), with a more pragmatic outlook towards implementation and monetization.

I was happy to see that most operators are now recognizing the importance of managing video experience for monetization. Du UAE's VP of Marketing, Vikram Chadha, seems to get it:
"We are transitioning our pricing strategy from bundles and metering to services. We are introducing email, social media, enterprise packages and are looking at separating video from data as a LTE monetization strategy."
As a result, the keynotes were more prosaic than in past editions, focusing on the cost of spectrum acquisitions and on regulatory pressure in the European Union preventing operators from mounting any defensible position against the OTT assault on their networks. Much of the agenda focused on pragmatic subjects such as roaming, pricing, policy management, heterogeneous networks and wifi/cellular handover. Nothing obviously earth-shattering on these subjects, but steady progress, as the technologies transition from lab to commercial trials and deployments.

As an example, there was a great presentation by Bouygues Telecom's EVP of Strategy, Frederic Ruciak, highlighting the company's strategy for the launch of LTE in France, a very competitive market, and how the company was able to achieve the number-one spot in LTE market share despite being the number-three "challenger" in 2G and 3G.

The next buzzword on the hype cycle to rear its head is NFV, with many operator CTOs publicly hailing the new technology as the magic bullet that will allow them to "launch services in days or weeks rather than years". I am getting quite tired of hearing that rationalization as an excuse for the multimillion-dollar investments made in this space, especially when no one seems to know what these new services will be. Right now, the only arguable benefit is capex containment, and I have seen little evidence that it will move past this stage in the mid term. Like the teenage sex joke, no one seems to know what it is, but everybody claims to be doing it.
There is still much to be resolved on this matter and the discussion will continue for some time. The interesting new positioning I heard at the show was appliance vendors referring to their offerings as PNFs (as in physical network functions), in contrast to, and as enablers for, VNFs. Although it sounds like a marketing trick, it makes a lot of sense for vendors to illustrate how NFV inserts itself into a legacy network, leading inevitably to a hybrid network architecture.

The consensus here seems to be that there are two prevailing strategies for the introduction of virtualized network functions.

  1. The first, "cap and grow", sees existing infrastructure equipment capped beyond a certain capacity and, little by little, complemented by virtualized functions, allowing incremental traffic to find its way onto the virtualized infrastructure. A variant might be "cap and burst", where a function subject to traffic bursts is dimensioned on physical assets to the mean peak traffic and all exceeding traffic is diverted to a virtualized function (a minimal sketch of this logic follows the list below).
  2. The second seems to favour the creation of vertical virtualized networks for market or traffic segments that are greenfield, M2M and VoLTE being the most cited examples.
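As a rough illustration of the "cap and burst" variant, the sketch below splits offered load between a capped physical function and a virtualized overflow; the figures and names are illustrative assumptions, not a vendor recipe.

```python
# Rough sketch of the "cap and burst" idea: traffic up to the capped physical
# capacity stays on the appliance (PNF), anything beyond spills over to the
# virtualized function (VNF). Numbers and names are illustrative assumptions.

def split_traffic(offered_gbps: float, pnf_cap_gbps: float) -> dict:
    """Return how much load stays on the capped physical function and how
    much bursts onto virtualized capacity."""
    to_pnf = min(offered_gbps, pnf_cap_gbps)
    to_vnf = max(offered_gbps - pnf_cap_gbps, 0.0)
    return {"pnf_gbps": to_pnf, "vnf_gbps": to_vnf}


print(split_traffic(offered_gbps=14.0, pnf_cap_gbps=10.0))
# {'pnf_gbps': 10.0, 'vnf_gbps': 4.0}
```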

Both strategies have advantages and flaws that I am exploring in my upcoming report on "NFV & virtualization in mobile networks 2014". Contact me for more information.



Tuesday, April 22, 2014

Video monetization & optimization 2014 executive summary

As announced earlier this month, my latest report "Mobile video monetization and optimization 2014" is out.

In 2014, mobile video is a fact of life. It has taken nearly 5 years for the service to transition from novelty to a growing habit that is quickly becoming an everyday occurrence in mature markets. Nearly a quarter of YouTube and Netflix views nowadays are on a tablet or a smartphone. Of course, users predominantly still stream over wifi, but as LTE slowly progresses across markets, users start to take for granted the network capacity to deliver video.

Already, LTE networks are starting to show signs of strain as video threatens the infrastructure and the business model of mobile content delivery.

On the regulatory front, with the US appeals court ruling in January that the FCC had no authority to impose its "Open Internet Order" (net neutrality) rules on broadband carriers, a wind of both hope and fear is blowing across the traffic management market.

Almost concurrently, we are seeing initiatives from network operators and OTTs alike to find new footing for business models and cooperation/competition.
  • AT&T is experimenting with sponsored data plans,
  • Verizon has bought a CDN,
  • Deutsche Telekom partners with Evernote and Spotify,
  • Orange persists in investigating Telco OTT with Libon,
  • Uninor India wants to charge for Facebook,
  • Netflix is trialing tiered pricing,
  • Facebook and Google are hinting at operating wireless networks…

In the meantime, mobile advertising still hasn't delivered on the promise of a hyper-targeted, location-aware, contextually relevant service. Privacy concerns are at their highest, with the fires started by the WikiLeaks and Edward Snowden NSA scandals fanned by "free internet" activists and a misinformed public.

Quality of Experience is a growing trend, from measurement to management, and experience assurance is starting to make its appearance, buoyed by a series of vague announcements and launches in the analytics, big data and network virtualization fields.

Legacy (already?!) video optimization vendors are seeing the emergence of smarter, more cost-effective, policy-driven platforms. The technology has not fully delivered on cost reduction, but it is being implemented for media inspection, analytics, media policy enforcement and control, and lately video-centric pricing models and bundles.

With the acquisition of the market leader last year and the merger of the number two and number three in market share at the beginning of this year, video optimization trials and RFxs have seen their decision making delayed.

Video optimization in 2014 is a mature market segment. The technology has been deployed in over 200 networks globally.


{Core Analysis} believes that video optimization will continue to be deployed in most networks as a media policy enforcement point and for media analytics.

Monday, January 20, 2014

All packets are not created equal: why DPI and policy vendors look at video encoding

As we are still contemplating the impact of last week's US ruling on net neutrality, I thought I would attempt today to settle a question I often get in my workshops. Why is DPI insufficient when it comes to video policy enforcement?

Deep packet inspection platforms have evolved from static rules-based filtering engines into sophisticated enforcement points allowing packet and protocol classification, prioritization and shaping. Ubiquitous in enterprise and telco networks, they are the jack-of-all-trades of traffic management, enabling use cases as diverse as policy enforcement, adult content filtering, lawful interception, QoS management, and peer-to-peer throttling or interdiction.
DPIs rely first on a robust classification engine. It snoops through data traffic and classifies each packet based on port, protocol, interface, origin, destination and so on. The more sophisticated engines go beyond layer 3 and are able to recognize classes of traffic using headers. This classification engine is sufficient for most traffic types, from web browsing to email, from VoIP to video conferencing or peer-to-peer sharing.
The premise here is that if you can recognize, classify and tag traffic accurately, then you can apply rules governing the delivery of this traffic, ranging from interdiction to authorization, with many variants of shaping in between.
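As a toy illustration of that premise, the sketch below classifies a packet on port and host header and looks up a policy action; real DPI engines go much deeper (flow state, signatures, heuristics), so this only shows the principle, and the class names and policy table are assumptions.

```python
# Toy classifier in the spirit described above: match on port and host
# header, fall back to "unknown", then look up a policy action.

def classify(packet: dict) -> str:
    port = packet.get("dst_port")
    host = (packet.get("http_host") or "").lower()

    if port == 5060:
        return "voip-signalling"
    if port in (80, 443):
        if "youtube" in host or "netflix" in host:
            # Origin-based inference only; see the caveats that follow.
            return "video-candidate"
        return "web"
    return "unknown"


POLICY = {"video-candidate": "shape", "web": "allow", "unknown": "inspect"}

pkt = {"dst_port": 80, "http_host": "www.youtube.com"}
cls = classify(pkt)
print(cls, POLICY.get(cls, "allow"))  # video-candidate shape
```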

DPI falls short in many cases when it comes to video streaming. Until 2008 or so, most video streaming relied on specialized protocols such as RTSP. Classification was easy, as videos were all encapsulated in a specific protocol, allowing rules to be instantiated and enforced in a pretty straightforward manner. The emergence and predominance of HTTP-based streaming video (progressive download, adaptive streaming and variants) has complicated the task for DPIs. The transport protocol remains the same as general web traffic, but the behaviour is quite different. As we have seen many times on this blog, video traffic must be measured differently from generic data traffic if policy enforcement is to be implemented. All packets are not created equal.


  • The first challenge is to recognize that a packet is video. DPIs generally infer the nature of an HTTP packet from its origin/destination. For instance, they can see that the traffic's origin is YouTube and therefore assume that it is video. This is insufficient: not all YouTube traffic is video streaming (browsing between pages, reading or posting comments, uploading a video, liking or disliking...). Applying video rules to browsing traffic, or vice versa, can have adverse consequences on the user experience.
  • The second challenge is policy enforcement. The main tool in the DPI arsenal for traffic shaping is setting the delivery bit rate for a specific class of traffic. As we have seen, videos come in many definitions (4K, HD, SD, QCIF...), many containers and many formats, resulting in a wide range of encoding bit rates. If you want to shape your video traffic, it is crucial to know all these elements and the encoding bit rate, because if traffic is throttled below the encoding rate, the video stalls and buffers or times out (see the sketch after this list). It is not reasonable to have a one-size-fits-all policy for video (unless it is to forbid usage altogether). In order to extract the video-specific attributes of a session, you need to decode it, which requires in-line transcoding capabilities, even if you do not intend to modify the video.
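To illustrate the second point, here is a minimal sketch relating a shaping rate to a stream's encoding bit rate; the 20% headroom figure is an assumption for illustration, not an industry rule.

```python
# Sketch of the point about encoding bit rate: a shaping rate only makes
# sense relative to the stream's encoding rate. The headroom figure is an
# illustrative assumption.

def safe_shaping_rate_kbps(encoding_kbps: float, headroom: float = 0.2) -> float:
    """Lowest delivery rate that should not starve the player's buffer."""
    return encoding_kbps * (1.0 + headroom)


def will_stall(shaping_kbps: float, encoding_kbps: float) -> bool:
    # Throttling below the encoding rate means the buffer drains faster
    # than it fills: stalls, re-buffering or time-outs.
    return shaping_kbps < encoding_kbps


print(safe_shaping_rate_kbps(2500))                        # 3000.0 kbps for a 2.5 Mbps stream
print(will_stall(shaping_kbps=1500, encoding_kbps=2500))   # True
```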


Herein lies the difficulty. To implement intelligent, sophisticated traffic management rules today, you need to be able to handle video. To handle video, you need to recognize it (not infer or assume) and measure it. To recognize and measure it, you need to decode it. This is one of the reasons why Allot bought Ortiva Wireless in 2012, Procera partnered with Skyfire, and ByteMobile more recently upgraded its video inspection to full-fledged DPI. We will see more generic traffic management vendors (PCRF, PCEF, DPI...) partner with and acquire video transcoding companies.