
Thursday, July 31, 2025

The Orchestrator Conundrum strikes again: Open RAN vs AI-RAN

10 years ago (?!) I wrote about the overlaps and potential conflicts between the different orchestration efforts in SDN and NFV. Essentially, I observed that network resources should ideally be orchestrated with awareness of services, and that service and resource orchestration should interact hierarchically and with priorities, so that a service's deployment and lifecycle are managed within resource capacity and, when that capacity fluctuates, priorities can be enforced.

Service orchestrators have never really been deployed successfully at scale, for a variety of reasons, but primarily because this control point was identified early on as a strategic battleground by network operators and traditional network vendors. A few network operators attempted to create an open source orchestration model (Open Source MANO), while traditional telco equipment vendors developed their own versions and refused to integrate their network functions with the competition's. In the end, most of the actual implementation effort focused on Virtual Infrastructure Management (VIM) and vertical VNF management, while orchestration remained fairly proprietary per vendor. Ultimately, Cloud Native Network Functions (CNFs) appeared and were deployed in Kubernetes, inheriting its native resource management and orchestration capabilities.
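As a rough illustration of what "inheriting Kubernetes' native resource management" means in practice, here is a minimal sketch of a CNF deployment: resource requests and limits bound the workload's footprint, and a priority class lets the scheduler preempt lower-priority pods when capacity fluctuates. All names and figures (cnf-upf, ran-critical, the CPU / memory numbers) are illustrative, not from any vendor or standard.

```python
# Minimal sketch of a CNF Deployment expressed as a Kubernetes manifest.
# Requests/limits give the scheduler the resource awareness discussed above;
# the priority class is how "priorities can be enforced" when capacity shrinks.
import json

cnf_deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "cnf-upf"},  # hypothetical user-plane CNF
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "cnf-upf"}},
        "template": {
            "metadata": {"labels": {"app": "cnf-upf"}},
            "spec": {
                "priorityClassName": "ran-critical",  # assumed PriorityClass
                "containers": [{
                    "name": "upf",
                    "image": "registry.example/upf:1.0",  # placeholder image
                    "resources": {
                        "requests": {"cpu": "4", "memory": "8Gi"},
                        "limits": {"cpu": "8", "memory": "16Gi"},
                    },
                }],
            },
        },
    },
}

# Print the manifest; it could be fed to `kubectl apply -f -`.
print(json.dumps(cnf_deployment, indent=2))
```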

In the last couple of years, Open RAN has attempted to collapse RAN Element Management Systems (EMS), Self Organizing Networks (SON) and Operations Support Systems (OSS) into the concept of Service Management and Orchestration (SMO). Its aim is ostensibly to provide a control platform for RAN infrastructure and services in a multivendor environment. The non real time RAN Intelligent Controller (RIC) is one of its main artefacts, allowing the deployment of rApps designed to visualize, troubleshoot, provision, manage, optimize and predict RAN resources, capacity and capabilities.

This time around, the concept of SMO has gained substantial ground, mainly because the leading traditional telco equipment manufacturers were not OSS / SON leaders, and because orchestration was an easy target for non-RAN vendors looking for a greenfield opportunity.

As we have seen, whether for MANO or SMO, the barriers to adoption weren't really technical but economic and commercial, as leading vendors were trying to protect their business while growing into adjacent areas.

Recently, AI-RAN has emerged as an interesting initiative, positing that RAN compute will evolve from specialized, proprietary and closed to generic, open and disaggregated; specifically, from specialized silicon to GPUs. GPUs are able to handle the complex calculations necessary to run a RAN workload, with capacity to spare. Their cost, however, greatly outweighs their utility if they are used exclusively for RAN. Since GPUs are used in all sorts of high compute environments for Machine Learning, Artificial Intelligence, Large and Small Language Models, model training and inference, the idea emerged that if the RAN deploys open, generic compute, it could be used for RAN workloads themselves (AI for RAN), for workloads that optimize the RAN (AI on RAN), and ultimately for AI/ML workloads completely unrelated to RAN (AI and RAN).

While this could theoretically solve the business case of deploying costly GPUs in hundreds of thousands of cell sites, provided the idle compute capacity can be resold as GPUaaS or AIaaS, it poses new challenges from a service / infrastructure orchestration standpoint. The AI-RAN Alliance is now faced with understanding the orchestration challenges between resources and AI workloads; a toy illustration follows.
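Here is a sketch (all names and numbers invented) of the admission logic such an orchestrator would need: RAN workloads get absolute priority on the cell site GPU pool, while monetized AI jobs backfill idle capacity and are the first to be preempted when RAN demand returns.

```python
# Toy GPU capacity broker for a shared cell site: "AI and RAN" jobs are
# admitted only into idle capacity and evicted as soon as RAN needs it back.
from dataclasses import dataclass, field

@dataclass
class Job:
    name: str
    gpus: int
    is_ran: bool  # RAN workloads always outrank monetized AI workloads

@dataclass
class SiteGpuPool:
    total_gpus: int
    running: list = field(default_factory=list)

    def free(self) -> int:
        return self.total_gpus - sum(j.gpus for j in self.running)

    def submit(self, job: Job) -> bool:
        if not job.is_ran:
            # AI jobs only backfill currently idle GPUs.
            if self.free() >= job.gpus:
                self.running.append(job)
                return True
            return False  # would be queued or placed at another site
        # RAN job: preempt AI jobs until it fits.
        while self.free() < job.gpus:
            victims = [j for j in self.running if not j.is_ran]
            if not victims:
                return False  # pool genuinely exhausted by RAN itself
            self.running.remove(victims[0])
        self.running.append(job)
        return True

pool = SiteGpuPool(total_gpus=8)
pool.submit(Job("du-baseband", 4, is_ran=True))
pool.submit(Job("llm-inference", 4, is_ran=False))  # resold idle capacity
pool.submit(Job("traffic-surge", 3, is_ran=True))   # evicts the AI tenant
print([j.name for j in pool.running])  # ['du-baseband', 'traffic-surge']
```

The hard part, of course, is not this loop but deciding who runs it: the SMO, Kubernetes, or a separate AI-RAN orchestrator.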

In an Open RAN environment, near real time and non real time RICs deploy xApps and rApps. The orchestration of the apps, services and resources is managed by the SMO. While not every app can be categorized as "AI", it is likely that the SMO will take responsibility for AI for RAN and AI on RAN orchestration. If AI and RAN requires its own orchestration beyond Kubernetes, it is unlikely to live in isolation from the SMO.

From my perspective, I believe that the multiplication of orchestration, policy management and enforcement points will not allow a multivendor control plane. The architecture and interfaces are still in flux, and specialty vendors will have trouble imposing their perspective without control of the end to end architecture. As a result, it is likely that the same vendor will provide the SMO, non real time RIC and AI-RAN orchestration functions (you know my feelings about the near real time RIC).

If you make the Venn diagram of vendors providing / investing in all three, you will have a good idea of the direction the implementation will take.

Monday, November 18, 2019

Announcing Edge computing and hybrid clouds workshops

After working 5 years on edge computing, and as one of the few analysts to have evaluated, then developed and deployed the technology in a telco network, I am happy to announce the immediate availability of the following workshops:

Hybrid and edge computing strategy
  • Hybrid cloud and Edge computing opportunity 
  • Demand for hybrid and edge services (internal and external)
  • Wholesale or retail business?
  • Edge strategies: what, where, when, how?
  • Hyperscalers strategies, positions, risks and opportunities
  • Operators strategies
  • Conclusions and recommendations

Edge computing Technology
  • Technological trends
  • SDN, NFV, container, lifecycle management
  • Open source, ONF, TIP, Akraino, MobiledgeX, Ori
  • Networks disaggregation, Open RAN, Open OLT
  • Edge computing: Build or buy?
  • Nokia, Ericsson, Huawei
  • Dell, Intel, …
  • Open compute, CORD
  • Conclusions and recommendations

Innovation and transformation processes
  • Innovation process and methodology 
  • How to jumpstart technological and commercial innovation
  • Labs, skills, headcount and budget
  • How to transition from innovation to commercial deployment
  • How to scale up sustainably
Drop me a line if you are interested.

Monday, April 10, 2017

Telefonica's innovation framework

I have received many requests over the last months to explain our innovation process in more detail. Now that our innovation methodology is the subject of a widely commented Harvard Business Review case study, I thought it was a good time to shed some light on how a large telco such as Telefonica can innovate in a fast paced environment.
Innovation is not only a decision, it's a process and a methodology. In our case, different teams look after external innovation, through business ventures and venture capital, and internal innovation, covering networks, data and moonshots. The teams I support, focused on network innovation, are adapting the lean elephant methodology to invent tomorrow's mobile, fixed and TV networks.

Ideation

The process starts with directed ideation, informed by our corporate customer segmentation, customer sentiment studies and selected themes. An innovation call centered around specific themes such as "imagine tomorrow's TV" or "Artificial intelligence and networks QoE" is launched across the group, with local briefings including our selection parameters. A jury is convened to review the hundreds of ideas and shortlist the most interesting. The selected intrapreneurs have a month to prepare a formal pitch for their ideas. They are assisted by customer experience specialists who help them refine the problem they seek to resolve, its applicability and market appeal.

Feasibility

After the pitch and selection, the intrapreneurs transition to the innovation team full time and are given a few weeks to create a feasibility plan and a preliminary resource budget for prototyping. Once ready, the successful applicants present the plan in detail to the jury.

Prototyping

The lucky few who pass this gate are given 3 to 8 months to prototype their project, together with commensurate resources. At this stage, the project must have strong internal sponsorship, with verticals or markets within Telefonica committing to take the prototype into their labs for functional testing. The resulting prototype, together with the value proposition and addressable market, is reviewed before passing to the next phase.

Market trial

The prototype is then hardened and deployed in a commercial network for friendly, limited A/B testing and refinement. This phase can last 2 to 6 months, with a growing number of users and increasingly sophisticated measurement of the value proposition's effectiveness. During this phase, a full product / service business case is also finalized, using the data collected during the market trial.

Productization and transfer


Does the project meet customer needs? Is it innovative and does it provide differentiation? Is it profitable, and does Telefonica have an unfair advantage in solving a real market problem? These are some of the tough questions the intrapreneur and their team must be able to answer before the solution can be productized and eventually transferred to one of our verticals, or used to create a new one.


This process has been the source of Telefonica's early advances in IoT, big data and smart cities. It has also killed, merged, pivoted and spun off hundreds of projects. The network innovation teams I support aim at radically changing network topology, deployment and value chains using software defined networks, virtualization, containerization and lambda computing, all the way to the edge of our networks. We are developers, network hackers, user experience experts, computer scientists, devops engineers...

The next months will see some exciting announcements on this. Stay tuned.

You can catch me and we can chat about it at the upcoming NFV world congress or TM Forum live.

Monday, October 19, 2015

SDN world 2015: unikernels, compromises and orchestrated obsolescence

Last week's Layer123 SDN and OpenFlow World Congress brought its usual slew of announcements and claims.

From my perspective, I came away from the show with mixed impressions.

On one hand, it is clear that SDN has now transitioned from proof of concept to commercial trial, if not full commercial deployment, and operators increasingly understand the limits of open source initiatives such as OpenStack for carrier-grade deployments. The telling sign is the increasing number of companies specializing in high performance, hardware based switches for OpenFlow and other protocols.

It feels that Open vSwitch has not yet hit its stride, notably in terms of performance, and operators are left either going open source, cost efficient but neither scalable nor performant, or compromising with best of breed, hardware based, hardened switches that offer high performance and scalability but not yet the agility of a software based implementation. What is new, however, is that operators seem ready to compromise for time to market, rather than wait for a possibly more open solution that may, or may not, deliver on its promises.

On the NFV front, I feel that many vendors have been forced to tone down their silly claims in terms of performance, agility and elasticity. It is quite clear that many of them have been called to prove themselves in operators' labs and have failed to deliver. In many cases, vendors are able to demonstrate agility, through VM porting / positioning using either their VNFM or integration with an orchestrator; they are even, in some cases, able to show some level of elasticity, with auto-scaling powered by their own EMS; and many have put out press releases claiming Gbps or Tbps or millions of simultaneous sessions of capacity...
... but few are able to demonstrate all three at the same time, since their performance achievements have, in many cases, relied on SR-IOV to bypass the hypervisor layer, which ties the VM to the CPU in a manner that makes agility and elasticity extremely difficult to achieve.
Operators, here again, seem bound to compromise between performance and agility if they want to accelerate their time to market.

Operators themselves came in droves to show their progress on the subject, but I felt a distinct change in tone regarding their capacity to actually get vendors to deliver on the promises of the successive NFV white papers. One issue lies flatly with the operators' own attitude. Many MNOs display unrealistic and naive expectations: they say that they are investing in NFV as a means to attain vendor independence, but they are unwilling to perform any integration themselves. It is very unlikely that large Telecom Equipment Manufacturers will willingly help deconstruct their own value proposition by offering commoditized, plug-and-play, open interfaced virtualized functions.

SDN and NFV integration is still dirty work. Nothing really performs at line rate without optimization; no agility, flexibility or scalability is really attained without fine tuned integration. Operators won't realize the benefits of the technology if they don't get in on the integration work themselves.

Lastly, what is still missing from my perspective is a service creation strategy that would make use of a virtualized network. Most network operators still mention service agility and time to market as key drivers, but when asked what they would launch if their network were fully virtualized and elastic today, they quote disappointing early examples such as virtual (!?) VPN, security or broadband on demand... timid translations of existing "services" into a virtualized world. I am not sure most MNOs realize their competition is not each other but Google, Netflix, Uber, Facebook and others...
By the time those competitors launch free and unlimited voice, data and messaging services underpinned by advertising or sponsored models, it will be quite late to start thinking of new services, even with a fully virtualized network. It feels like MNOs are orchestrating their own obsolescence.

Finally, the latest buzzwords you must have in your presentation this quarter are:
  • The pets and cattle analogy
  • SD-WAN
  • 5G

...and if you haven't yet formulated a strategy with respect to containers (Docker, etc.), don't bother: they're dead and the next big thing is unikernels. This and more in my latest report and workshop, "SDN NFV in wireless networks 2015 / 2016".

Monday, June 8, 2015

Data traffic optimization feature set

Data traffic optimization in wireless networks has reached a mature stage as a technology. The innovations that marked the years 2008 – 2012 are now slowing down and most core vendors exhibit a fairly homogeneous feature set.

The difference lies in the implementation of these features, which can yield vastly different results depending on whether vendors use open source or purpose-built caching and transcoding engines, and whether congestion detection is based on observed or deduced parameters.

Vendors nowadays tend to differentiate on QoE measurement / management and on monetization strategies, including content injection, recommendation and advertising.

Here is a list of commonly implemented optimization techniques in wireless networks; a toy sketch of one of them follows the list.
  • TCP optimization
    • Buffer bloat management
    • Round trip time management
  • Web optimization
    • GZIP
    • JPEG / PNG… transcoding
    • Server-side JavaScript
    • White space / comments… removal
  • Lossless optimization
    • Throttling / pacing
    • Caching
    • Adaptive bit rate manipulation
    • Manifest mediation
    • Rate capping
  • Lossy optimization
    • Frame rate reduction
    • Transcoding
      • Online
      • Offline
      • Transrating
    • Contextual optimization
      • Dynamic bit rate adaptation
      • Device targeted optimization
      • Content targeted optimization
      • Rule base optimization
      • Policy driven optimization
      • Surgical optimization / Congestion avoidance
  • Congestion detection
    • TCP parameters based
    • RAN explicit indication
    • Probe based
    • Heuristics combination based
  • Encrypted traffic management
    • Encrypted traffic analytics
    • Throttling / pacing
    • Transparent proxy
    • Explicit proxy
  • QoE measurement
    • Web
      • page size
      • page load time (total)
      • page load time (first rendering)
    • Video
      • Temporal measurements
        • Time to start
        • Duration loading
        • Duration and number of buffering interruptions
        • Changes in adaptive bit rates
        • Quantization
        • Delivery MOS
      • Spatial measurements
        • Packet loss
        • Blockiness
        • Blurriness
        • PSNR / SSIM
        • Presentation MOS
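To make one of these techniques concrete, here is a toy version of the "TCP parameters based" congestion detection listed above: congestion is deduced from round trip time inflation over a baseline, the kind of indirect signal used when no explicit RAN indication or probe is available. The smoothing constant and threshold are illustrative, not from any product.

```python
# Deduced congestion detection: queuing delay shows up as the smoothed RTT
# rising well above the minimum RTT (a proxy for propagation delay).
class RttCongestionDetector:
    def __init__(self, inflation: float = 1.5, alpha: float = 0.125):
        self.min_rtt = float("inf")  # baseline RTT floor
        self.srtt = None             # smoothed RTT, EWMA as in TCP (RFC 6298)
        self.alpha = alpha
        self.inflation = inflation   # how much inflation counts as congestion

    def sample(self, rtt_ms: float) -> bool:
        """Feed one RTT sample; return True if congestion is suspected."""
        self.min_rtt = min(self.min_rtt, rtt_ms)
        self.srtt = rtt_ms if self.srtt is None else (
            (1 - self.alpha) * self.srtt + self.alpha * rtt_ms)
        return self.srtt > self.inflation * self.min_rtt

detector = RttCongestionDetector()
for rtt in [48, 52, 50, 150, 190, 210]:  # ms, fabricated samples
    congested = detector.sample(rtt)
print("congested" if congested else "clear")  # prints: congested
```

A real implementation would combine several such signals (the "heuristics combination" entry above) before throttling or pacing anything.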


An explanation of each technique and its feature set can be obtained as part of the mobile video monetization report series, individually as a feature report, or in a workshop.

Tuesday, January 3, 2012

For or against Adaptive Bit Rate? part I: what is ABR?

Adaptive Bit Rate streaming (ABR) was invented to enable content providers to deliver video streaming services in environments where bandwidth fluctuates. The benefit is clear: as a connection's capacity changes over time, the video carried over that connection can vary its bit rate, and therefore its size, to adapt to network conditions. The player (or client) and the server exchange discrete information on the control plane throughout the transmission, whereby the server exposes the available bit rates for the video being streamed and the client selects the appropriate version based on its reading of current connection conditions.

The technology is fundamental to accommodating the growth of online video delivery over unmanaged (OTT) and wireless networks.
The implementation is as follows: a video file is encoded into different streams, at different bit rates. The player can "jump" from one stream to another as the condition of the transmission degrades or improves. A manifest document is exchanged between the server and the player at the establishment of the connection, so that the player knows the list of versions and bit rates available for delivery; a client-side sketch follows.
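Here is a hedged sketch of the client half of that exchange: given the variant bit rates advertised in the manifest, the player picks the highest one that fits its measured throughput, with a safety margin. The bit rate ladder and margin are made-up illustrations.

```python
# Simplified ABR variant selection, as performed by the player.
def pick_variant(ladder_kbps, measured_kbps, safety=0.8):
    """Return the highest advertised bit rate below the usable bandwidth."""
    usable = measured_kbps * safety  # keep headroom for fluctuation
    fitting = [b for b in sorted(ladder_kbps) if b <= usable]
    return fitting[-1] if fitting else min(ladder_kbps)  # floor: lowest rung

ladder = [350, 700, 1500, 3000, 6000]  # kbps, as advertised in a manifest
for throughput in (5000, 1200, 400):   # the connection degrades over time
    print(f"{throughput} kbps measured -> {pick_variant(ladder, throughput)} kbps stream")
```

Real players add hysteresis and buffer occupancy logic so they don't oscillate between variants, but the principle is the same.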

Unfortunately, the main content delivery technology vendors then started to diverge from the standard implementation, to differentiate themselves and to better control the user experience and the content provider community. We have reviewed some of these vendor strategies here. Below are the main implementations:

  • Apple HTTP Live Streaming (HLS) for iPhone and iPad: implemented over HTTP and MPEG2 TS, it uses a proprietary manifest format called m3u8. Apple creates different versions of the same stream (2 to 6, usually) and breaks each stream down into little "chunks" to make it easy for the client to jump from one stream to another. This results in thousands of chunks per stream, identified by timecode. Unfortunately, the content provider has to deal with the pain of managing thousands of fragments for each video stream. A costly implementation.
  • Microsoft IIS Smooth Streaming (Silverlight, Windows Phone 7): Microsoft implemented fragmented MP4 (fMP4) to enable a stream to be separated into discrete fragments, again allowing the player to jump from one fragment to another as conditions change. Microsoft uses AAC for audio and AVC/H.264 for video compression. The implementation allows each video and audio stream, with all its fragments, to be grouped in a single file, providing a more cost effective solution than Apple's.
  • Adobe HTTP Dynamic Streaming (HDS) for Flash: Adobe uses a proprietary format called F4F to allow delivery of Flash video over RTMP and HTTP. The Flash Media Server creates multiple streams, at different bit rates but also at different quality levels. Streams are full length (the duration of the video).

None of the implementations above is interoperable, from a manifest or from a file perspective, which means that a content provider with a single 1080p HD video can end up creating one version for each player, multiplied by the number of streams needed to accommodate bandwidth variation, multiplied by the number of segments, chunks or files for each version... With three implementations and half a dozen bit rates each, a simple video can result in 18 versions and thousands of fragments to manage. This is the reason why only 4 to 6% of current videos are transmitted using ABR. The rest of the traffic uses good old progressive download, with no capacity to adapt to changes in bandwidth, which in turn explains why wireless network operators (over 60 of them) have elected to implement video optimization systems in their networks. We will look, in my next posts, at the pros and cons of ABR and at the complementary and competing technologies that aim to achieve the same goals.

Find part II of this post here.

Tuesday, June 7, 2011

Google vs. rest of the world: WebM vs H264

After the acquisition of On2 Technologies, as anticipated, Google has open sourced its video codec WebM (VP8 video and Ogg Vorbis audio encapsulated in a Matroska container) in an attempt to counteract H.264.

The issue:
In the large-scale war between the giants Apple, Adobe, Microsoft and Google, video has become the latest battlefield. As PCs and mobile devices consume more and more video, the four companies battle to capture content owners' and device manufacturers' mind share, in order to ensure user experience market dominance.
Since video formats and protocols are fairly well standardized, the main area for differentiation remains codecs.
Codecs are left to anyone's implementation choice. The issue can be thorny, though, as most codecs used to encode / decode specific formats require payment of licenses to intellectual property owners.
For instance, the H.264 format is managed by MPEG LA, which has assembled a pool of patents associated with the format from diverse third parties and licenses their usage, collecting and redistributing royalties on behalf of the patent owners. H.264 is a format used for transmission of video in variable bandwidth environments and has been adopted by most handset manufacturers, as well as Microsoft, Apple and Adobe, as the de facto standard.

If you are Google, though, the idea of paying licenses to third parties, most of which are direct competitors, for something as fundamental as video is a problem.

The move:
As a result, Google has announced that it is converting all of YouTube's most watched videos to WebM and that the format becomes the preferred one for all Google properties (YouTube, Chrome...).
The purpose here is for Google to avoid paying royalties to MPEG LA, while controlling the user experience by vertically integrating codec usage across content owners, browsers and device manufacturers.

It does not mean that Google will stop supporting other formats (Flash, H.264...), but the writing is on the wall, if they can garner enough support.

The result:
It is arguable whether WebM can actually circumvent MPEG LA's H.264 royalty claims. Investigations are already ongoing into whether VP8 infringes any H.264 intellectual property. Conversely, the U.S. Department of Justice is investigating whether MPEG LA's practices are stifling competition.

In the meantime, content owners, device manufacturers and browser vendors have to contend with yet another format and codec, increasing the fragmentation in this space and reducing interoperability.

Sunday, May 15, 2011

Mobile video 101: protocols, containers, formats & codecs

Mobile video as a technology and market segment can at times be a little complicated.
Here is a simple syllabus, in no particular order, of what you need to know to be conversant in mobile video. It is not intended to be exhaustive or very detailed, but rather to provide a knowledge base for those interested in understanding more about the market dynamics I address in other posts.


Protocols:
There are many protocols used in wireless networks to deliver and control video. You have to differentiate between routing protocols (IP), transmission protocols (TCP & UDP), media transport (RTP), session control (RTSP) and content control (RTCP). I will focus here on session and content control.
These protocols are used to set up, transmit and control video over mobile networks.

Here are the main ones:
  • RTSP (Real Time Streaming Protocol) is an industry protocol created specifically for media streaming. It is used to establish and control (play, stop, resume) a streaming session, and appears in many unicast on-deck mobile TV and VOD services.
  • RTCP (Real Time Transport Control Protocol) is the content control protocol associated with RTP. It provides the statistics (packet loss, bit transmission, jitter...) necessary for a server to perform real-time media quality control on an RTSP stream.
  • HTTP download and progressive download (PD): HTTP is a generic protocol used for the transport of many content formats, including video. Download and progressive download differ in that the former requires the whole content to be delivered and saved to the device before it can be played, asynchronously, while the latter provides, at the beginning of the session, a set of metadata associated with the content which allows it to be played before its download completes.
    • Microsoft Silverlight, Adobe RTMP and Apple progressive streaming: these three variants of progressive download are proprietary. They offer additional capabilities beyond vanilla HTTP PD (pre-encoding and delivery of multiple streams, client side stream selection, chunk delivery...) and are the subject of an intense war between the three companies to occupy the mind share of content developers and owners. This is the reason why you cannot browse a Flash site or view a Flash video on your iPhone.
Containers:
A container in video is a file composed of the payload (video, audio, subtitles, programming guide...) and the metadata (codecs, encoding rate, key frames, bit rate...). The metadata is a set of descriptive fields that indicate the nature of the media and its duration in the payload. A minimal sketch of how a container is structured follows the list below. The most popular are:
  • 3GPP (.3GP): the format used in most mobile devices, as the container recommended for video by the 3GPP standards.
  • MPEG-4 part 14 (.MP4): one of the most popular containers for internet video.
  • Flash video (FLV, F4V): an Adobe-created container, very popular as the preferred format for the BBC, Google Video, Hulu, Metacafe, Reuters, Yahoo video, YouTube... It requires a Flash player.
  • MPEG-2 TS: MPEG Transport Stream is used for broadcast of audio and video. It is used in on-deck broadcast TV services in mobile and cable/ satellite video delivery.
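To see what this means concretely, here is a minimal sketch that walks the box structure of an MP4 / 3GP file: each box starts with a 4-byte big-endian size and a 4-byte type tag, with the metadata typically in 'moov' and the payload in 'mdat'. The file path is a placeholder, and the sketch ignores the special size values 0 and 1 (to end-of-file and 64-bit sizes).

```python
# Walk the top-level boxes of an MP4/3GP container and print their types.
import struct

def list_boxes(path: str) -> None:
    with open(path, "rb") as f:
        while (header := f.read(8)) and len(header) == 8:
            size, box_type = struct.unpack(">I4s", header)
            print(box_type.decode("latin-1"), size, "bytes")
            if size < 8:   # size 0 (to EOF) or 1 (64-bit) not handled here
                break
            f.seek(size - 8, 1)  # skip this box's body

list_boxes("example.mp4")  # typically prints ftyp, moov (metadata), mdat (payload)
```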
Formats:
Formats are a set of standards that describe how a video file should be played.

  • H.263: an old format used in legacy devices and applications. It is mandated by ETSI and 3GPP for IMS and MMS but is being replaced by H.264.
  • H.264 / MPEG-4 part 10 / AVC: a family of standards composed of several profiles for different uses, device types, screen sizes... It is the most popular format in mobile video.
  • MPEG-2: a standard for lossy audio and video compression used in DVD and broadcast (digital TV, over the air, cable, satellite). MPEG-2 describes two container types: MPEG2-TS for broadcast and MPEG-2 PS for files.
  • MPEG-4: an evolution of MPEG-2, adding new functionality such as DRM, 3D and error resilience for transmission over lossy channels (wireless, for instance). Many features in MPEG-4 are left to the developer to decide whether or not to implement. The features are grouped by profiles and levels; there are 28 parts in MPEG-4, and a codec usually describes which MPEG-4 parts it supports. It is the most popular format on the internet.
Codecs:
"Codec" stands for coder-decoder: a program that can decode a media stream and re-encode it. Codecs are used for compression (lossless), optimization (lossy) and encryption of video. A "raw" video file is usually stored in a YCbCr (YUV) format, which provides the full description of every pixel in a video. This representation requires a lot of space for storage and a lot of processing power for decoding / encoding, which is why a video is usually encoded with a codec, to allow for a smaller size or variable transmission quality (see the quick size calculation after the list below). It is important to understand that while a container obeys strict rules and semantics, codecs are not regulated, and each vendor decides how to decode and encode a media format.
  • DivX: proprietary MPEG-4 implementation by DivX.
  • WMV (Windows Media Video): Microsoft proprietary.
  • x264: licensable H.264 encoding software.
  • VP6, VP7, VP8...: proprietary codecs developed by On2 Technologies, acquired by Google and released as open source.
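As a back-of-the-envelope check on why raw video must be encoded, compare a raw 1080p, 30 fps stream stored in YUV 4:2:0 (12 bits per pixel) with a common ~5 Mbps H.264 encode; all figures are round, illustrative numbers.

```python
# Raw vs encoded bit rate for 1080p30 video.
width, height, fps = 1920, 1080, 30
bits_per_pixel = 12                      # YUV 4:2:0 chroma subsampling
raw_bps = width * height * bits_per_pixel * fps
encoded_bps = 5_000_000                  # typical 1080p H.264 target, assumed

print(f"raw:     {raw_bps / 1e6:.0f} Mbps")   # ~746 Mbps
print(f"encoded: {encoded_bps / 1e6:.0f} Mbps")
print(f"ratio:   ~{raw_bps // encoded_bps}x") # ~149x compression
```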