Thursday, July 31, 2025

The Orchestrator Conundrum strikes again: Open RAN vs AI-RAN

Ten years ago (?!) I wrote about the overlaps and potential conflicts between the different orchestration efforts in SDN and NFV. In essence, I observed that network resources should ideally be orchestrated with awareness of the services they support, and that service and resource orchestration should interact hierarchically and with priorities, so that a service's deployment and lifecycle are managed within resource capacity and, when that capacity fluctuates, priorities can be enforced.
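
To make that hierarchy concrete, here is a minimal sketch (entirely hypothetical class names; an illustration of the principle, not any orchestrator's actual design) of a service orchestrator placing and preempting services by priority within a fluctuating resource pool:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Resource orchestrator's view: raw capacity, no service awareness."""
    capacity: int                     # abstract units (vCPU, PRBs, ...)
    allocated: int = 0

    def free(self) -> int:
        return self.capacity - self.allocated

@dataclass(order=True)
class Service:
    priority: int                     # lower value = more important
    name: str = field(compare=False)
    demand: int = field(compare=False)

class ServiceOrchestrator:
    """Service orchestrator: deploys within capacity and enforces priorities."""
    def __init__(self, pool: ResourcePool):
        self.pool, self.running = pool, []

    def deploy(self, svc: Service) -> bool:
        victims = sorted((s for s in self.running if s.priority > svc.priority),
                         reverse=True)            # numerically worst priority first
        if self.pool.free() + sum(v.demand for v in victims) < svc.demand:
            return False                          # rejected, even with preemption
        for v in victims:
            if self.pool.free() >= svc.demand:
                break
            self._evict(v)
        self.pool.allocated += svc.demand
        self.running.append(svc)
        return True

    def on_capacity_change(self, new_capacity: int) -> None:
        # Capacity fluctuated: shed the least important services until we fit.
        self.pool.capacity = new_capacity
        while self.pool.allocated > new_capacity and self.running:
            self._evict(max(self.running))        # max() = least important

    def _evict(self, svc: Service) -> None:
        self.running.remove(svc)
        self.pool.allocated -= svc.demand
```

A degraded link, for instance, would translate into on_capacity_change() shedding best-effort services first while premium services keep running; a real orchestrator would of course scale down or migrate before evicting.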

Service orchestrators never really managed to be deployed successfully at scale, for a variety of reasons, but primarily because this control point was identified early on as a strategic battleground by network operators and traditional network vendors alike. A few network operators attempted to create an open source orchestration model (Open Source MANO), while traditional telco equipment vendors developed their own versions and refused to integrate their network functions with the competition's. In the end, most of the actual implementation work focused on Virtual Infrastructure Management (VIM) and vertical VNF management, while orchestration remained fairly proprietary per vendor. Ultimately, Cloud Native Network Functions appeared and were deployed in Kubernetes, inheriting its native resource management and orchestration capabilities.

In the last couple of years, Open RAN has attempted to collapse RAN Element Management Systems (EMS), Self Organizing Networks (SON) and Operation Support Systems (OSS) into the concept of Service Management and Orchestration (SMO). Its ostensible aim is to provide a control platform for RAN infrastructure and services in a multi-vendor environment. The non-real-time RAN Intelligent Controller (RIC) is one of its main artefacts, allowing the deployment of rApps designed to visualize, troubleshoot, provision, manage, optimize and predict RAN resources, capacity and capabilities.

This time around, the concept of SMO has gained substantial ground, mainly because the leading traditional telco equipment manufacturers were not OSS / SON leaders, and because orchestration was an easy target for non-RAN vendors looking for a greenfield opportunity.

As we have seen, whether for MANO or SMO, the barriers to adoption weren't really technical but rather economic and commercial, as leading vendors were trying to protect their business while growing into adjacent areas.

Recently, AI-RAN has emerged as an interesting initiative, positing that RAN compute would evolve from specialized, proprietary and closed to generic, open and disaggregated. Specifically, RAN compute could evolve from specialized silicon to GPUs. GPUs are able to handle the complex calculations necessary to manage a RAN workload, with spare capacity. Their cost, however, greatly outweighs their utility if used exclusively for RAN. Since GPUs are used in all sorts of high-compute environments to facilitate Machine Learning, Artificial Intelligence, Large and Small Language Models, model training and inference, the idea emerged that if the RAN deploys open generic compute, it could be used for RAN workloads (AI for RAN), for workloads that optimize the RAN (AI on RAN), and ultimately for AI/ML workloads completely unrelated to RAN (AI and RAN).

While this could theoretically solve the business case for deploying costly GPUs in hundreds of thousands of cell sites, provided that the idle compute capacity could be resold as GPUaaS or AIaaS, it poses new challenges from a service and infrastructure orchestration standpoint. The AI-RAN Alliance is faced with understanding the orchestration challenges between resources and AI workloads.
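
To illustrate the resource side of the problem, here is a minimal sketch (hypothetical names and units; not an AI-RAN Alliance specification) of a cell-site GPU budget where RAN headroom is never sold, and AI leases are reclaimed when RAN demand rises:

```python
class CellSiteGPU:
    """Toy model of one cell site's GPU budget, in arbitrary compute units."""
    def __init__(self, total_units: int, ran_reserved: int):
        self.total = total_units
        self.ran_reserved = ran_reserved          # headroom for RAN (AI for RAN)
        self.ai_leases: dict[str, int] = {}       # tenant -> units (AI and RAN)

    def leased(self) -> int:
        return sum(self.ai_leases.values())

    def sellable(self) -> int:
        # Idle capacity that can be resold as GPUaaS / AIaaS.
        return self.total - self.ran_reserved - self.leased()

    def lease(self, tenant: str, units: int) -> bool:
        if units > self.sellable():
            return False                          # RAN headroom is never sold
        self.ai_leases[tenant] = self.ai_leases.get(tenant, 0) + units
        return True

    def on_ran_load(self, needed_units: int) -> None:
        """RAN demand rose: grow the reservation, reclaiming AI leases if needed."""
        self.ran_reserved = max(self.ran_reserved, needed_units)
        while self.leased() > self.total - self.ran_reserved and self.ai_leases:
            victim = max(self.ai_leases, key=self.ai_leases.get)
            del self.ai_leases[victim]            # in practice: checkpoint / migrate

gpu = CellSiteGPU(total_units=100, ran_reserved=30)
gpu.lease("tenant-a", 40)                         # OK: 70 units were sellable
gpu.on_ran_load(70)                               # RAN spike: tenant-a reclaimed
print(gpu.sellable())                             # 30 units left to resell
```

The orchestration question is precisely who owns on_ran_load() and the eviction policy: the SMO, a Kubernetes scheduler, or a separate AI-RAN orchestrator.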

In an Open RAN environment, the near-real-time and non-real-time RICs deploy xApps and rApps. The orchestration of the apps, services and resources is managed by the SMO. While not every app can be categorized as "AI", it is likely that the SMO will take responsibility for AI for RAN and AI on RAN orchestration. If AI and RAN requires its own orchestration beyond Kubernetes, it is unlikely to operate in isolation from the SMO.
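
A speculative sketch of that division of labour (the categories come from the AI-RAN taxonomy above; the interface is invented for illustration and is not an O-RAN API):

```python
from enum import Enum

class Workload(Enum):
    AI_FOR_RAN = "ai-for-ran"   # AI running the RAN itself (e.g., channel estimation)
    AI_ON_RAN = "ai-on-ran"     # rApps / xApps optimizing the RAN
    AI_AND_RAN = "ai-and-ran"   # third-party AI riding on spare RAN compute

class SMO:
    """Toy SMO: a single policy point deciding who orchestrates each workload."""
    def place(self, app: str, kind: Workload) -> str:
        if kind is Workload.AI_ON_RAN:
            return f"{app}: deployed as an rApp/xApp via the RICs, SMO-managed"
        if kind is Workload.AI_FOR_RAN:
            return f"{app}: pinned to RAN compute, lifecycle owned by the SMO"
        # AI-and-RAN runs under Kubernetes, but inside capacity envelopes
        # the SMO publishes -- not in isolation from it.
        return f"{app}: scheduled by Kubernetes within SMO capacity policy"

for app, kind in [("beam-mgr", Workload.AI_FOR_RAN),
                  ("energy-saver", Workload.AI_ON_RAN),
                  ("llm-inference", Workload.AI_AND_RAN)]:
    print(SMO().place(app, kind))
```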

From my perspective, the multiplication of orchestration, policy management and enforcement points will not allow a multi-vendor control plane. Architectures and interfaces are still in flux, and specialty vendors will have trouble imposing their perspective without control of the end-to-end architecture. As a result, it is likely that the same vendor will provide the SMO, non-real-time RIC and AI-RAN orchestration functions (you know my feelings about the near-real-time RIC).

If you make the Venn diagram of vendors providing / investing in all three, you will have a good idea of the direction the implementation will take.

Monday, December 21, 2015

Bytemobile: what's next?

Following the abrupt announcement of the Bytemobile product line's discontinuation by Citrix, things are starting to get a little clearer in terms of what the potential next steps could be for its customers.

Citrix was the market leader in number of deployments and revenue in the video optimization market when it decided to kill this product offering due to an internal strategic realignment. The news left many customers confused as to what support - if any - they can expect from the company.

Citrix's first order of business over the last month has been to meet with every major account to reassure them that the transition will follow a plan. What transpires at this point in time is that a few features from the ByteMobile T-3100 product family will be migrated to NetScaler, probably towards the end of 2016. Citrix is still circling the wagons at this stage and seems to be evaluating the business case for the transition, which will condition how many features are ported and whether feature parity can be reached.

In many cases, network operators who have deployed versions of ByteMobile T-3100 have been put on notice to upgrade to the latest version, as end-of-support notices for older versions will go out next year.

Presumably, Citrix won't be able to confirm NetScaler's detailed roadmap and transition plan until it has a better idea of the number and type of customers that will elect to migrate.

In the meantime, ByteMobile's historical competitors are drawing up battle plans to take advantage of this opportunity. A forklift upgrade is never an easy task to negotiate and, no doubt, there will be much pencil sharpening in the new year in core network procurement departments.

The video optimization market has changed dramatically over the last year. The growth in encrypted traffic, the uncertainty surrounding Citrix and the net neutrality debate have changed the feature set operators have been looking for. Orders for real-time transcoding have dropped sharply because of cost and encryption, while TCP optimization, encrypted traffic analytics, video advertising and adaptive bit rate management are gaining favor.

The recent T-Mobile USA "Binge On" offering, providing managed video for premium services, is also being closely watched by many network operators and will in all likelihood create more interest in video management collaboration solutions.

As usual, this and more in my report on video monetization.

Monday, June 8, 2015

Data traffic optimization feature set

Data traffic optimization in wireless networks has reached a mature stage as a technology. The innovations that marked the years 2008 – 2012 are now slowing down, and most core vendors exhibit a fairly homogeneous feature set.

The difference comes in the implementation of these features, which can yield vastly different results depending on whether vendors use open-source or purpose-built caching and transcoding engines, and whether congestion detection is based on observed or deduced parameters.
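
As an illustration of the "deduced" approach, here is a minimal sketch (thresholds and structure invented for illustration) of a heuristic that infers congestion from observed TCP behaviour alone, with no RAN probe:

```python
from collections import deque

class CongestionHeuristic:
    """Deduce congestion from observed TCP behaviour, without a RAN probe."""
    def __init__(self, window: int = 20):
        self.rtts = deque(maxlen=window)         # RTT samples (ms)
        self.goodputs = deque(maxlen=window)     # goodput samples (kbps)

    def sample(self, rtt_ms: float, goodput_kbps: float) -> None:
        self.rtts.append(rtt_ms)
        self.goodputs.append(goodput_kbps)

    def congested(self) -> bool:
        if len(self.rtts) < self.rtts.maxlen:
            return False                         # not enough evidence yet
        # Bufferbloat signature: RTT inflated well above the path minimum
        # while goodput stalls below its recent average.
        rtt_inflated = self.rtts[-1] > 2.0 * min(self.rtts)
        avg_goodput = sum(self.goodputs) / len(self.goodputs)
        goodput_stalled = self.goodputs[-1] < 0.8 * avg_goodput
        return rtt_inflated and goodput_stalled
```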

Vendors nowadays tend to differentiate on QoE measurement / management and on monetization strategies, including content injection, recommendation and advertising.

Here is a list of commonly implemented optimization techniques in wireless networks (short sketches of manifest mediation and of the temporal QoE measurements follow the list).
  • TCP optimization
    • Buffer bloat management
    • Round trip time management
  • Web optimization
    • GZIP
    • JPEG / PNG… transcoding
    • Server-side JavaScript
    • White space / comments… removal
  • Lossless optimization
    • Throttling / pacing
    • Caching
    • Adaptive bit rate manipulation
    • Manifest mediation
    • Rate capping
  • Lossy optimization
    • Frame rate reduction
    • Transcoding
      • Online
      • Offline
      • Transrating
    • Contextual optimization
      • Dynamic bit rate adaptation
      • Device targeted optimization
      • Content targeted optimization
      • Rule-based optimization
      • Policy driven optimization
      • Surgical optimization / Congestion avoidance
  • Congestion detection
    • TCP parameters based
    • RAN explicit indication
    • Probe based
    • Heuristics combination based
  • Encrypted traffic management
    • Encrypted traffic analytics
    • Throttling / pacing
    • Transparent proxy
    • Explicit proxy
  • QoE measurement
    • Web
      • page size
      • page load time (total)
      • page load time (first rendering)
    • Video
      • Temporal measurements
        • Time to start
        • Duration loading
        • Duration and number of buffering interruptions
        • Changes in adaptive bit rates
        • Quantization
        • Delivery MOS
      • Spatial measurements
        • Packet loss
        • Blockiness
        • Blurriness
        • PSNR / SSIM
        • Presentation MOS
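
As an example of how one of these techniques works, manifest mediation rewrites the adaptive bit rate manifest so the client only ever sees the renditions the operator is willing to deliver. A minimal sketch for an HLS-style master playlist (made-up manifest; real deployments must also handle DRM, session tokens and relative URIs):

```python
import re

def cap_hls_manifest(master_playlist: str, max_kbps: int) -> str:
    """Drop variant streams above max_kbps from an HLS master playlist.

    The client still drives adaptive bit rate switching, but it only ever
    sees the renditions the operator is willing to deliver.
    """
    out, skip_next_uri = [], False
    for line in master_playlist.splitlines():
        if line.startswith("#EXT-X-STREAM-INF"):
            m = re.search(r"BANDWIDTH=(\d+)", line)
            if m and int(m.group(1)) > max_kbps * 1000:
                skip_next_uri = True       # drop this variant and its URI line
                continue
        elif skip_next_uri and line and not line.startswith("#"):
            skip_next_uri = False          # this was the dropped variant's URI
            continue
        out.append(line)
    return "\n".join(out)

# Hypothetical three-rendition manifest: capping at 1500 kbps drops the 1080p variant.
MASTER = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=1280x720
mid.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1920x1080
high.m3u8"""
print(cap_hls_manifest(MASTER, max_kbps=1500))
```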

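Similarly, the temporal video QoE measurements above can be derived from a simple player or gateway event stream. A minimal sketch (event names are invented for illustration):

```python
def video_qoe_summary(events: list[tuple[float, str]]) -> dict:
    """Derive temporal QoE metrics from a (timestamp_s, event) stream.

    Hypothetical event names: play_pressed, first_frame, stall_start,
    stall_end, bitrate_change.
    """
    t_play = next(t for t, e in events if e == "play_pressed")
    t_first = next(t for t, e in events if e == "first_frame")
    stalls, stall_begin, abr_switches = [], None, 0
    for t, e in events:
        if e == "stall_start":
            stall_begin = t
        elif e == "stall_end" and stall_begin is not None:
            stalls.append(t - stall_begin)
            stall_begin = None
        elif e == "bitrate_change":
            abr_switches += 1
    return {
        "time_to_start_s": t_first - t_play,      # time to start
        "buffering_events": len(stalls),          # number of interruptions
        "buffering_total_s": sum(stalls),         # duration of interruptions
        "abr_switches": abr_switches,             # changes in adaptive bit rates
    }

events = [(0.0, "play_pressed"), (2.1, "first_frame"),
          (34.0, "stall_start"), (37.5, "stall_end"),
          (60.0, "bitrate_change")]
print(video_qoe_summary(events))
```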

An explanation of each technology and its feature set can be obtained as part of the mobile video monetization report series or individually as a feature report or in a workshop.

Tuesday, March 10, 2015

Mobile video 2015 executive summary

As is now traditional, I return from Mobile World Congress with a head full of ideas and views on market evolution, fueled by dozens of meetings and impromptu discussions. The 2015 mobile video monetization report, now in its fourth year, reflects the trends and my analysis of the mobile video market, its growth, opportunities and challenges.

Here is the executive summary from the report to be released this month.

2014 has been a year of contrasts for deployments of video monetization platforms in mobile networks. The market has grown in deployments and value, but an unease has gripped some of its protagonists, forcing exits and pivot strategies, while players with new value propositions have emerged. This transition year is due to several factors.

On the growth front, we have seen the emergence of MVNOs and interconnect / clearing houses as buying targets, together with the natural turnover and replacement of now-aging and fully amortized platforms deployed five or six years ago.

Additionally, the market leaders' upgrade strategies have naturally created some space for challengers and new entrants. Mature markets have seen mostly replacements and MVNO greenfield deployments, while emerging markets have added new units in markets that are either too early for 3G or already saturated in 4G. Volume growth has been particularly sustained in Eastern / Central Europe, North Africa, the Middle East and South East Asia.

On the other hand, the emergence and growth of traffic encryption, coupled with the persistent legal and regulatory threats surrounding the net neutrality debate, have cooled down, delayed and in some cases shut down optimization projects as operators try to rethink their options. Western Europe and North America have seen a marked slowdown, while South America is only just starting to show interest.

The value of the deals has been in line with last year's, after the sharp erosion caused by the competitive environment. The leading vendors have consolidated their approach, taken on new strategies and, overall, capitalized on their installed base, while many new deals have gone to new entrants and market challengers.

2014 has also been the first year of a commercial public cloud deployment, which should soon be followed by others. Network function virtualization has captivated many network operators' imagination and science experiment budgets, which has prompted the emergence of the notion of traffic classification and management as a service.

Video streaming, specifically, has shown great growth in 2014, consolidating its place as the fastest growing service in mobile networks and digital content altogether. 2014 and early 2015 have seen many acquisitions of video streaming, packaging and encoding technology companies. What is new, however, is that a good portion of these acquisitions were performed not by other technology companies but by OTT players such as Facebook and Twitter.

Mobile video advertising is starting to become a "thing" again, as investments, inventory and views show triple-digit growth. The trend suggests mobile video advertising could become the single largest revenue opportunity for mobile operators within a five-year timeframe, but its implementation demands a change in attitude, organization and approach that is alien to most operators' DNA. The transformation, akin to a heart transplant, will probably leave many dead on the operating table before the graft takes and the technique is refined, but operators might not have much choice, looking at Google's and Facebook's announcements at Mobile World Congress 2015.

Will new technologies such as LTE Multicast, due to make their start in earnest this year and promising quality-assured HD content via streaming or download, be able to unlock the value chain?


The mobile industry is embattled and finds itself facing some great threats to its business model. As the saying goes, those who survive are not necessarily the strongest, but rather those who adapt the fastest.

Wednesday, June 18, 2014

Are we ready for experience assurance? part II

Many vendors' reporting capabilities are just fine when it comes to troubleshooting issues associated with the connectivity or health of their own system. Their capability to infer, beyond observation of their own system, the health of a connection or of the network is oftentimes limited.

Analytics, by definition, require a large dataset, ideally covering several systems and elements, to provide correlation and pattern recognition on otherwise seemingly random events. In an environment as complex as a mobile network, it is extremely difficult to understand what a user's experience is on their phone. There are, however, means to extrapolate and infer the state of a connection, a cell or a service by looking at fluctuations in network connections.

Traffic management vendors routinely report on the state of a session by measuring the TCP connection and its changes. Being able to associate with that session the device type, time of day, location and service being used is good, but a far cry from analytics.

Most systems will be able to detect that a connection went wrong and a user had a sub-par experience. Being able to tell why is where analytics' value is. Being able to prevent it is big data territory.

So what is experience assurance? How does (should) it work?

For instance, a client calls the call center to complain about a poor video experience: the video was sluggish, starting 7 seconds after pressing play and buffering after 15 seconds of playback.
A DPI engine would be able to identify whether TCP and HTTP traffic were running efficiently at the time of the connection.
A probe in the RAN would be able to report a congestion event in a specific location.
A video reporting engine would be able to look at whether the definition and encoding of the video was compatible with the network speed at the time.
The media player in the device would be able to report whether there was enough resources locally to decode, buffer, process and play the video.
A video gateway should be able to detect the connection impairment in real time and provide the means to correct it, or to elegantly notify the customer of the impending degradation of the video before they experience a negative QoE.
A big data analytics platform should be able to point out that the poor experience is the result of a congestion in that cell that occurs nearly daily at the same time because the antenna serving that cell is in an area where there is a train station and every day the rush hour brings throngs of people connecting to that cell at roughly the same time.
An experience assurance framework would be able to feed instructions back to the policy framework, forcing downloads, emails and non-real-time data traffic to be delayed to absorb short bursts of video usage until the congestion passes. It should also allow operators to decide what the minimum level of quality should be for video and data traffic, in terms of delivery, encoding speed, picture quality, start-up time, etc., and proactively manage the video traffic to that target when the network "knows" that congestion is likely.
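
As a thought experiment, the feedback loop described above could look like this minimal sketch (every name and threshold is hypothetical; as the next paragraph argues, no vendor actually exposes such an interface today):

```python
from dataclasses import dataclass

@dataclass
class CellForecast:
    cell_id: str
    congestion_probability: float   # e.g., from the daily-pattern model above

class PolicyFeedback:
    """Toy feedback loop: act on the policy framework before the rush hour."""
    def actions(self, forecast: CellForecast) -> list:
        if forecast.congestion_probability < 0.7:
            return []                            # nothing to pre-empt
        return [
            # Delay elastic traffic to absorb the short video burst.
            ("delay", forecast.cell_id, ["email", "download", "software-update"]),
            # Hold video to an agreed quality floor rather than best effort.
            ("quality-floor", forecast.cell_id,
             {"startup_time_max_s": 5, "delivery_mos_min": 3.5}),
        ]

# The train-station cell from the example: congestion nearly certain at rush hour.
for action in PolicyFeedback().actions(CellForecast("cell-train-station", 0.95)):
    print(action)
```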

Experience assurance is a concept that is making its debut when it comes to data and video services. To be effective, a proper solution should ideally be able to gather real-time events from the RAN, the core, the content, the service provider and the device, and to decide in real time what the nature of the potential impairment is, what the possible courses of action are to reduce or negate it, or what the means are to notify the user of a sub-optimal experience. No single vendor, to my knowledge, is able to achieve this use case at this point in time, either on its own or through partnerships. The technology vendors are too specialized, and the elements involved in the delivery and management of data traffic too loosely integrated, to offer real experience assurance today.

Vendors who want to provide experience assurance should first focus on the data. Most systems create event or call logs, registering hundreds of parameters for every session, every second. Properly representing what is happening on the platform itself is quite difficult. It is an exercise in interpreting and representing what is relevant and actionable versus what is merely interesting. This is an exercise in small data. Understanding relevance and discriminating good data from over-engineered logs is key.


A good experience assurance solution must rely on a strong detection, analytics and traffic management solution. When it comes to video, this means a video gateway that is able to perform deep media inspection and to extract data points that can be exported into a reporting engine. The data exported cannot be just a dump of every event of every session: the reporting engine is only going to be as good as the quality of the data fed into it. This is why traffic management products must be designed with analytics in mind from the ground up if they are to be efficiently integrated within an experience assurance framework.

Tuesday, June 17, 2014

Are we ready for experience assurance? part I

As mentioned before, Quality of Experience (QoE) was a major theme in 2012-2013. How to detect, measure and manage various aspects of the customer experience has in many cases taken precedence over the savings and monetization rhetoric at vendors and operators alike.

As illustrated in a recent telecoms.com survey, operators see network quality as the most important differentiator in their market. In their overwhelming majority, they would like to implement business models where they receive a revenue share for a guaranteed level of quality. The problem comes with defining what quality means in a mobile network.


It is clear that many network operators in 2014 have come to the conclusion that they are ill-equipped to understand the consumer's experience when it comes to data services in general and video in particular. It is not rare for a network operator's customer care center to receive complaints about the quality of the video service when no alarm, failure or even congestion has been detected. Obviously, serving your clients when you are blind to their experience is a recipe for churn.

As a result, many operators spent much of 2013 requesting information and evaluating various vendors' capability to measure video QoE. We have seen (here and here) the different types of video QoE measurement.

This line of questioning has spurred a flurry of product launches, partnerships and announcements in the field of analytics. Here is a list of announcements from the last few months:
  • Procera Networks partners with Avvasi
  • Citrix partners with Zettics and launches ByteMobile Insight
  • Kontron partners with Vantrix and launches cloud based analytics
  • Sandvine launches the Real Time Entertainment Dashboard
  • Guavus partners with Opera Skyfire
  • Alcatel Lucent launches Motive Big Network Analytics
  • Huawei partners with Actix to deliver customer experience analytics…

Suddenly, everyone who has a web GUI and a reporting engine delivers delicately crafted analytics, surfing the wave of big data, Hadoop and NFV as a means to satisfy operators' ever-growing need for actionable insight.

Unfortunately, in some cases, the operator will find itself with a collection of ill-fitting dashboards providing anecdotal or contradictory data. This is likely to lead to more confusion than problem solving. So what is (or should be) experience assurance? The answer in tomorrow's post.