Those who know me know that I have long been a firm supporter of openness by design. It is important, though, not to conflate openness and open source when it comes to telco strategy.
Most network operators believe that any iteration of their network elements must be fully interoperable within their internal ecosystem (their network) and their external ecosystem (other telco networks). This is fundamentally what allows any phone user to roam and use any mobile network around the planet.
This need for interoperability has reinforced, over the last 20 years, the importance of standards bodies such as ETSI and 3GPP and of forums such as the GSMA. This interoperability by design has led to the creation of rigid interfaces, protocols and datagrams that govern how network elements should integrate and interface in telco and IP networks.
While this model has worked well for the purpose of creating a unified global aggregation of networks with 3G/4G, departing from the fragmentation of 2G (GSM, CDMA, TDMA, AMPS...), it has also somewhat slowed down and stifled the pace of innovation for network functions.
The last few years have seen an explosion of innovation in networks, stemming from the emergence of data centers, clouds, SDN and virtualization. The benefits have been incredible: freedom from proprietary hardware dependency, increased multi-tenancy, resource elasticity, traffic programmability, automation and, ultimately, the atomization of network functions into microservices. This allowed the creation of higher-level network abstractions without the need for low-level programming or coding (for more on this, read anything ever written by the excellent Simon Wardley). These benefits have been systematically developed and enjoyed by the companies that needed to scale their networks the fastest: the webscalers.
In the process, as the technologies underlying these new networks passed from prototype, to product, to service, to microservice, they have become commoditized. Many of these technologies, once close to maturity, have been open sourced, allowing a community of similarly interested developers to flourish and develop new products and services.
Telecom operators were inspired by this movement and decided that they, too, needed to evolve their networks into something more akin to an elastic cloud, in order to decorrelate traffic growth from cost. Unfortunately, the desire for interoperability and the lack of engineering development resources led operators to try to influence and drive the development of a telco open source ecosystem without really participating in it. NFV (Network Functions Virtualization) and telco OpenStack are good examples of great ideas with poor results. Let's examine why:
NFV was an attempt to separate hardware from software and to stimulate a new ecosystem of vendors to develop telco functions in a more digital fashion. Unfortunately, the design of NFV was a quasi-literal transposition of appliance functions, with little influence from SDN or microservice architecture. More importantly, it relied on an orchestration function that was going to become the "app store" of the network. This orchestrator, to be truly vendor-agnostic, would have to be fully interoperable with all vendors adhering to the standard and preferably expose open interfaces to allow interchangeability of network functions and of orchestrator vendors. In practice, none of the traditional telecom equipment manufacturers had plans to integrate with a third-party orchestrator, and each would try to deploy its own as a condition for deploying its network functions. Correctly identifying the strategic risk, the community of operators started two competing open source projects: Open Source MANO (OSM) and the Open Network Automation Platform (ONAP).
Without entering into the technical details, both projects suffered, to varying degrees, from a cardinal sin. Open source development is not a spectator sport. You cannot simply will an ecosystem or a community of developers into existence. You do not demand contribution, you earn it. The only way open source projects are successful is if their main sponsors actively contribute (code, not diagrams or specs) and if the code goes into production and its benefits are easily illustrated. In both cases, most operators opted to rely heavily on third parties to develop what they had envisioned, with insufficient real-life experience to ensure the results were up to the task. Only those who roll up their sleeves and develop really benefit from these projects.
OpenStack was, in comparison, already a successful ecosystem and open source development forum when telco operators tried to bend it to their purpose. It had been deployed in many industries, ranging from banking and insurance to transportation and manufacturing, and had a large developer community. Operators thought that piggybacking on this community would accelerate the development of an OpenStack suited to telco operations. The first efforts were to introduce traditional telco requirements (high availability, geo-redundancy, granular scalability...) into a model that was fundamentally best-effort IT cloud infrastructure management. As I wrote six years ago, OpenStack at that stage was ill-suited to the telco environment. And it remained so. Operators resisted hiring engineers and coding sufficient functions into OpenStack to make it telco-grade, relying instead on their traditional telco vendors to do the heavy lifting for them.
The lessons here are simple.
If you want to build a network that is open by design, to ensure vendor independence, you need to manage the control layer yourself. In all likelihood, trying to specify it and asking others to build it for you will fail if you have never built one yourself.
Open source can be a good starting point if you want to iterate and learn fast, prototype and test, and get smart enough to know what is mature, what should be bought, what should be developed and where the differential value lies. Don't expect open source to be a means for others to do your labour. The only way you get more out of open source than you put in is a long-term investment with real contribution, not just guidance and governance.
Monday, November 18, 2019
Announcing edge computing and hybrid cloud workshops
After working on edge computing for 5 years, and possibly being one of the only analysts to have evaluated, then developed and deployed the technology in a telco network, I am happy to announce the immediate availability of the following workshops:
Hybrid and edge computing strategy
- Hybrid cloud and edge computing opportunity
- Demand for hybrid and edge services (internal and external)
- Wholesale or retail business?
- Edge strategies: what, where, when, how?
- Hyperscalers' strategies, positions, risks and opportunities
- Operators' strategies
- Conclusions and recommendations
Edge computing technology
- Technological trends
- SDN, NFV, containers, lifecycle management
- Open source, ONF, TIP, Akraino, MobiledgeX, Ori
- Network disaggregation, Open RAN, Open OLT
- Edge computing: build or buy?
- Nokia, Ericsson, Huawei
- Dell, Intel, …
- Open Compute, CORD
- Conclusions and recommendations
Innovation and transformation processes
- Innovation process and methodology
- How to jumpstart technological and commercial innovation
- Labs, skills, headcount and budget
- How to transition from innovation to commercial deployment
- How to scale up sustainably
Saturday, November 16, 2019
Edge Computing or hybrid cloud?
Edge computing has been gaining much recognition and hype since I started working on it 5 years ago. I am in a fortunate position to have explored it as an analyst, being one of the early participants in ETSI's Industry Specification Group on Multi-access Edge Computing (MEC), to have then developed and deployed it as an operator for the Telefonica group, and now to be back advising vendors and service providers on the strategies and challenges associated with its development.
One of the key challenges associated with edge computing is that pretty much every actor in the value chain (technology vendors, colocation and hosting companies, infrastructure companies, telecommunication operators, video streaming, gaming and caching services, social media and internet giants, cloud leaders...) is coming at it from a different perspective and perception of what it could (and should) do. This invariably leads to much misunderstanding, as each one is trying to understand the control points in the value chain and assert their position.
- Technology vendors see a chance either to entrench their position further, based on proprietary, early implementations, or to disrupt the traditional vendors' oligopoly, based on open (source), disaggregated networking. Traditional blue-chip vendors also see a chance to move further down the path of replacing black-box networks with white boxes.
- Colocation and hosting companies do not quite see why edge is that much different from cloud hosting that they have been doing all along, but are happy to jump on the bandwagon if it means better margins.
- Infrastructure companies see a chance to move up the value chain by providing differentiated, value-added real estate and connectivity services.
- Telecommunications operators tend to see edge computing as a possible opportunity to rejoin the cloud war, after having lost the last battles. The promise of futuristic 5G-like services for drones, remote surgery, autonomous cars, etc. is certainly what they have been communicating about, but that is not going to materialize into tangible revenue streams for another 5 to 7 years. There are, however, other short-term revenues that can be created by deploying the technology.
- Video streaming, gaming and caching services feel that they have been the edge pioneers, with specialized services or physical slices deep in cloud and telco networks. They tend to resist the move from physical, proprietary appliances towards the open, multi-tenant model that would make the business more profitable for all.
- Social media and internet giants tend to feel that there is something they should, or could, do there, but most of their infrastructure relies either on their proprietary private clouds or on public clouds, and it is unclear whether these models are compatible.
- Lastly, the cloud leaders certainly see edge computing as a growth opportunity to offer differentiated cloud services and performance but, again, they are unsure whether to push the limits of their own clouds or to integrate with others.
I feel that we have started off on the wrong foot here. There is no such thing as edge computing. There is a cloud, and there are devices and data centers. The largest, most impactful performance move the cloud can make is to integrate with the last mile: the telco networks. Where you want a workload to run, a dataset to reside, or a pipeline to transit through should be the result of:
- What is available in terms of capacity
- What is your budget / needs in terms of workload, performance, latency
- How much it costs / what is the price to run where
- What are the legal / regulatory restrictions with respect to locality, sovereignty, privacy...
The rest should be easy enough to calculate programmatically. For this to occur, there is still much work to be done. The "plumbing", which is how to connect and administer heterogeneous clouds, is almost there. The largest effort is really for these industries to come together on the reservation, consumption and fulfillment model. We might be able to live today with the Amazon, Microsoft, Alibaba and Google cloud models, but we certainly won't be able to accommodate a lot more.
This means we need an industry-wide effort for cloud hybridization at the business layer. It is necessary for all network operators to present the same set of APIs and connectivity services to all the cloud operators if we want to see this market move in the right direction.
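To make this concrete, here is a minimal sketch, in Python, of the kind of programmatic placement decision described above, assuming operators and cloud providers expose comparable capacity offers through a common API. The class names, fields and selection rule are invented for illustration and do not correspond to any existing API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SiteOffer:
    """A hypothetical capacity offer exposed by a cloud region or telco edge site."""
    name: str
    available_vcpus: int
    latency_ms: float          # expected round-trip latency to the end user
    price_per_hour: float      # cost of running the workload at this site
    jurisdictions: List[str]   # legal territories the site can serve

@dataclass
class WorkloadRequest:
    """What the workload owner needs, mirroring the four criteria listed above."""
    vcpus: int
    max_latency_ms: float
    max_price_per_hour: float
    required_jurisdiction: str

def place(request: WorkloadRequest, offers: List[SiteOffer]) -> Optional[SiteOffer]:
    """Return the cheapest site satisfying capacity, latency, budget and legal constraints."""
    candidates = [
        o for o in offers
        if o.available_vcpus >= request.vcpus
        and o.latency_ms <= request.max_latency_ms
        and o.price_per_hour <= request.max_price_per_hour
        and request.required_jurisdiction in o.jurisdictions
    ]
    return min(candidates, key=lambda o: o.price_per_hour) if candidates else None

# Example: a distant cloud region vs. a telco edge site (all figures illustrative)
offers = [
    SiteOffer("central-cloud-eu", 1024, 45.0, 0.08, ["EU"]),
    SiteOffer("telco-edge-madrid", 64, 8.0, 0.20, ["EU", "ES"]),
]
request = WorkloadRequest(vcpus=8, max_latency_ms=15.0,
                          max_price_per_hour=0.50, required_jurisdiction="ES")
print(place(request, offers).name)  # -> telco-edge-madrid
```

The selection logic itself is trivial; the hard part, as argued above, is getting every operator and cloud to describe their offers in the same terms.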
Tuesday, December 4, 2018
Edge computing and Telecom Infra Project
Video taken at the 2018 Telecom Infra Project summit in London.
Wednesday, November 7, 2018
The edge computing and access virtualization opportunity
Have you ever tried to edit a presentation online, without downloading it? Did you try to change a diagram or the design of a slide and find it maddening? It is slow to respond, the formatting and alignment are wrong… did you end up downloading it to edit it locally?
Have you ever had to upload a very important and large file? I am talking about tens of gigabytes: the video of your wedding, or the response to a commercial tender that required hundreds of hours of work. Did you then watch that progress bar slowly creeping up, or the frustratingly revolving hourglass spinning for minutes on end?
Have you ever bought the newest, coolest console game only to wait for the game to update, download and install for 10, 20, 30 minutes?
These are a few examples of occurrences so banal that they are part of our everyday experience. We live through them, accepting the inherent frustration, because these services are still progress over the past.
True, the cloud has brought us a new range of experiences, new services and a great increase in productivity. With its ubiquity, economy of scale and seemingly infinite capacity, the cloud offers an inexpensive, practical and scalable way to offer global services.
So why are we still spending so much money on phones, computers, game consoles,… if most of the intelligence can be in the cloud and just displayed on our screens?
The answer is complex. We also value immediacy, control and personalization, attributes that cloud services struggle to provide all at once. Immediacy is simple; we do not like to wait. That is why, even though it might be more practical or economical to store all content or services in mega data centers on the other side of the planet, we are not willing to wait for our video to start, for our search to display, for our multiplayer game to react…
Control is more delicate. Privacy, security and regulatory mandates are difficult to achieve in a hyper-distributed, decentralized internet. That is why, even though we trust our online storage accounts, we still store files on our computer’s hard drive, pictures on our phone, and game saves in our console.
Personalization is even more elusive. Cloud services do a great job of understanding our purchase history, viewing habits, likes, etc., but there still seems to be a missing link between these services and the true context when you are at home teleworking and want to make sure your video conference will be smooth while your children play video games on the console and live-stream a 4K video.
As we can see, there are still services and experiences that are not completely satisfied by the cloud. For these we keep relying on expensive devices at home or at work and accept the limitations of today’s technologies.
Edge computing and service personalization is a Telefonica Networks Innovation project that promises to solve these issues, bringing the best of the cloud and of on-premises computing to your services.
The idea is to distribute the cloud further, to Telefonica’s data centers, and to deploy these closer to the users. Based on the Unica concepts of network virtualization, applied to our access networks (mobile, fiber residential and enterprise), edge computing allows us to deploy services, content and intelligence a few milliseconds away from your computer, your phone or your console.
How does it work? It is simple. A data center is deployed in our central office, based on open architecture and interfaces, allowing us to deploy our traditional TV, fixed and mobile telephony, and internet residential and corporate services. Then, since the infrastructure is virtualized and open, it allows us to rapidly deploy third-party services, from your favorite game provider to your trusted enterprise office applications or your mobile apps. Additionally, the project has disaggregated and virtualized part of our access networks (the OLT for fiber, the baseband unit for mobile, the WAN for enterprise) and radically simplified them.
The result is what is probably the world’s first multi-access edge computing platform spanning residential, enterprise and mobile access that is completely programmable. It allows us, for the first time, to provide a single transport and a single range of services to all our customers, where only the access differs.
What does it change? Pretty much everything. All of a sudden, you can upload a large 1GB file to your personal storage in 6 seconds instead of the 5 minutes it took on the cloud. You can play your favorite multiplayer game online without a console. You can edit that graphic file online without having to download it. …And these are just existing services getting better. We are also looking at new experiences that will surprise you. Stay tuned!
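As a rough sanity check on that example, the arithmetic below works out the effective throughput each scenario implies. The file size and durations are simply the figures quoted above, used as an illustration.

```python
# Effective throughput needed to move a 1 GB file in each scenario.
file_bits = 1 * 8e9             # 1 GB expressed in bits

edge_seconds = 6                # upload to an edge data center in the central office
cloud_seconds = 5 * 60          # upload to a distant cloud region

print(f"edge : {file_bits / edge_seconds / 1e6:.0f} Mbps")   # ~1333 Mbps
print(f"cloud: {file_bits / cloud_seconds / 1e6:.0f} Mbps")  # ~27 Mbps
```

In other words, the claim amounts to sustaining roughly fifty times more throughput to the nearby edge site than to the distant cloud.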
Thursday, October 4, 2018
Telefonica and the edge computing opportunity
Real-world use cases of edge computing with Intel, Affirmed Networks and Telefonica
Monday, June 13, 2016
Time to get out of consumer market for MNOs?
I was delivering a workshop on SDN / NFV in wireless last week at a major pan-European tier-one operator group, and the questions of encryption and net neutrality were again put on the table.
How much clever, elastic, agile software-defined traffic management can we really expect when "best effort" dictates the extent of traffic management and encryption makes even understanding traffic composition and velocity difficult?
There is no easy answer. I have spoken at length on both subjects (here and here, for instance) and the challenges have not changed much. Encryption is still a large part of traffic and, although it is not growing as fast as initially expected after Google, Netflix, Snapchat or Facebook's announcements, it is still a dominant part of data traffic. Many start to think that HTTPS / SSL is a first-world solution, as many small and medium-scale content or service providers that live on freemium or ad-sponsored models can't afford the additional cost and latency unless they are forced to. Some think that encryption levels will hover around 50-60% of the total until mass adoption of HTTP/2, which could take 5+ years. We have seen, with T-Mobile's Binge On, a first service launch that actively manages traffic, even encrypted traffic, to an agreed-upon quality level. Net neutrality activists cried foul at the launch of the service, but quickly retreated when they saw its popularity and the first tangible signs of collaboration between content providers, aggregators and operators for customers' benefit.
As mentioned in the past, the problem is not technical, moral or academic. Encryption and net neutrality are just symptoms of an evolving value chain where the players are attempting to position themselves for dominance. The solution will be commercial and will involve collaboration in the form of content metadata exchange to monitor, control and manage traffic. Mobile Edge Computing can be a good enabler in this. Mobile advertising, which still lags by over $20B in investment in the US alone when compared to other media and to time spent / eyeball engagement, will likely be part of the equation as well.
...but what happens in the meantime, until the value chain realigns? We have seen consumer postpaid ARPU declining in most mature markets for the last few years, while engagement and usage of so-called OTT services have exploded. Many operators continue to keep their heads in the sand, thinking "business as usual" while timidly investigating new potential "revenue streams".
I think that the time has come for many to wake up and take hard decisions. In many cases, operators are not equipped organizationally or culturally for the transition that is necessary to flourish in a fluid environment where consumers flock to services that are free, freemium or ad-sponsored. What operators know best, subscription services, see their prices under intense pressure because OTTs look at usage and penetration at a global level rather than per country. For those operators who understand the situation and are changing their ways, the road is still long and full of obstacles, particularly on the regulatory front, where they are not playing by the same rules as their OTT competition.
I suggest here that for many operators, it is time to get out. You had a good run and made lots of money on consumer services through 2G, 3G and early 4G; the next dollars or euros are going to be tremendously more expensive to earn than the earlier ones.
At this point, I think there are emerging and underdeveloped verticals (such as enterprise and IoT) that are easier to penetrate (fewer regulatory barriers, more need for managed network capabilities and, at least in the case of enterprise, more investment possibilities).
I think that at this stage, any operator who derives most of its revenue from consumer services should assume that these will likely dwindle to nothing unless drastic operational, organizational and cultural changes occur.
Some operators see the writing on the wall and have started the effort. There is no guarantee that it will work, but having a software-defined, virtualized, elastic network will certainly help if they are betting the farm on service agility. Others are looking at new technologies, open source and standards as they have done in the past, aligning little boxes from industry vendors in neat PowerPoint roadmap presentations, hiring a head of network transformation or virtualization... For them, I am afraid, reality will come hard and fast. You don't invest in technologies to build services. You build services first and then look at whether you need more or new technologies to enable them.
Thursday, May 5, 2016
MEC: The 7B$ opportunity
Extracted from Mobile Edge Computing 2016.
Table of contents
Defining an addressable market for an emerging product or technology is always an interesting challenge. On one hand, you have to evaluate the problems the technology solves and their value to the market; on the other hand, you have to appreciate the possible cost structure and the psychological price expectations of the potential buyers / users.
This warrants a top-down and bottom-up approach, looking at how the technology can contribute to or substitute for some current radio and core network spending, together with a cost-based review of the potential physical and virtual infrastructure. [...]
The cost analysis is comparatively easy, as it relies on well-understood current cost structures for physical hardware and virtual functions. The assumptions surrounding hardware costs have been reviewed with the main x86-based hardware vendors. The VNF pricing relies on discussions with large and emerging telecom equipment vendors on the price structure of standard VNFs such as EPC, IMS, encoding, load balancers and DPI. Traditional telco professional services, maintenance and support costs are apportioned and included in the calculations.
The overall assumption is that MEC will become part of the fabric of 5G networks and that MEC equipment will cover up to 20% of a network (coverage or population) when fully deployed.
The report features the total addressable market, cumulative and incremental, for MEC equipment vendors and integrators, broken down by CAPEX / OPEX and by consumer, enterprise and IoT services.
It then provides a review of operators' opportunities and revenue models for each segment.
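As an illustration of the bottom-up side of that exercise, the sketch below multiplies a handful of placeholder inputs (site counts, per-site hardware and VNF costs) by the 20% coverage assumption quoted above. None of these figures come from the report; they are invented purely to show the mechanics of the sizing.

```python
# Bottom-up sizing sketch for MEC equipment spend (all inputs are placeholders).
aggregation_sites_per_network = 1_000   # candidate MEC locations in one network
coverage_ratio = 0.20                   # report assumption: up to 20% of a network
networks_deploying = 100                # operators assumed to deploy MEC

capex_per_site = 50_000                 # x86 servers, switches, integration (USD)
vnf_licenses_per_site = 30_000          # EPC, IMS, encoding, DPI... (USD)
annual_opex_ratio = 0.20                # maintenance / support as a share of CAPEX

sites = aggregation_sites_per_network * coverage_ratio * networks_deploying
capex = sites * (capex_per_site + vnf_licenses_per_site)
opex_per_year = capex * annual_opex_ratio

print(f"sites deployed : {sites:,.0f}")
print(f"total CAPEX    : ${capex / 1e9:.1f}B")
print(f"yearly OPEX    : ${opex_per_year / 1e9:.1f}B")
```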
Monday, April 4, 2016
MEC 2016 Executive Summary
2016 sees a sea change in the fabric of the mobile value chain. Google reports that mobile search revenue now exceeds desktop, whereas 47% of Facebook members are now exclusively on mobile, which generates 78% of the company’s revenue. It has taken time, but most OTT services that were initially geared towards the internet are rapidly transitioning towards mobile.
The impact is still to be felt across the value chain.
OTT providers have a fundamentally different view of services and value different things than mobile network operators. While mobile networks have been built on the premises of coverage, reliability and ubiquitous access to metered network-based services, OTTs rely on free, freemium, ad-sponsored or subscription-based services where fast access and speed are paramount. An increase in latency impacts page load and search time, and can cost OTTs billions in revenue.
The reconciliation of these views and the emergence of a new coherent business model will be painful but necessary, and will lead to new network architectures.
Traditional mobile networks were originally designed to deliver content and services that were hosted on the network itself. The first mobile data applications (WAP, multimedia messaging…) were deployed in the core network, as a means to be both as close as possible to the user and centralized, to avoid replication and synchronization issues.
3G and 4G networks still bear the design associated with this antiquated distribution model. As technology and user behaviours have evolved, a large majority of the content and services accessed on cellular networks today originate outside the mobile network. Although content is now stored in and accessed from clouds, caches, CDNs and the internet, a mobile user still has to go through the internet, the core network, the backhaul and the radio network to get to it. Each of these steps sees a substantial decrease in throughput capacity, from hundreds of Gbps down to Mbps or less. Additionally, each hop adds latency to the process. This is why networks continue to invest in increasing throughput and capacity. Streaming a large video or downloading a large file from a cloud or the internet is a little bit like trying to suck ice cream through a 3-foot bending straw.
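A rough latency budget makes the point. The per-hop values in the sketch below are indicative orders of magnitude chosen for illustration, not measurements from any particular network.

```python
# Indicative one-way latency contributions (in milliseconds) for a mobile user
# reaching content in a distant cloud region vs. at the edge of the RAN.
path_to_cloud = {
    "radio access": 10,
    "backhaul": 5,
    "core network": 10,
    "internet transit to cloud region": 30,
}
path_to_edge = {
    "radio access": 10,
    "aggregation point (MEC host)": 2,
}

print("to cloud:", sum(path_to_cloud.values()), "ms one-way")  # ~55 ms
print("to edge :", sum(path_to_edge.values()), "ms one-way")   # ~12 ms
```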
Throughput and capacity certainly look set to grow tremendously with the promise of 5G networks, but latency remains an issue. Reducing latency requires reducing the distance between the consumer and where content and services are served. CDNs and commercial specialized caches (Google, Netflix…) have been helping reduce latency in fixed networks by caching content as close as possible to where it is consumed, with propagation and synchronization of content across Points of Presence (PoPs). Mobile networks’ equivalent of PoPs are the eNodeBs, RNCs or cell aggregation points. These network elements, part of the Radio Access Network (RAN), are highly proprietary, purpose-built platforms that route and manage mobile radio traffic. Topologically, they are the closest elements mobile users interact with when they access mobile content. Positioning content and services there, right at the edge of the network, would substantially reduce latency.
For the first time, there is an opportunity for network operators to offer OTTs what they will value most: ultra-low latency, which will translate into a premium user experience and increased revenue. This will come at a cost, as physical and virtual real estate at the edge of the network will be scarce. Net neutrality will not work at the scale of an eNodeB, as commercial law will dictate the few application and service providers able to pre-position their content.
Wednesday, November 4, 2015
What are your intentions with my network?
Over the last few months, there has been much talk about intent rather than prescription in telecom network connectivity and traffic management. Intent expresses a desired outcome, whereas prescription describes the path and actions necessary to reach that outcome.
For instance, in a video optimization environment, an intent could be: "I want all users in a cell to be able to stream video at their requested definition, but if the total demand exceeds capacity, I want all videos to downgrade until they can all be served simultaneously."
The current prescriptive model could look more like:
- Append cell ID to radius / diameter traffic
- Segregate HTTP traffic at the DPI
- Send HTTP to web gateway
- Segregate video traffic at the web gateway
- Send video traffic to video optimization engine
- Detect if video is
- HTTP progressive download or
- HLS or
- Adaptive bit rate or
- other
- Detect video encoding bit rate
- Measure video delivery bit rate
- Aggregate traffic per Cell ID
- If video encoding bit rate exceeds video delivery bit rate in a given cell
- Load corresponding rule from PCRF (diameter Gx interface)
- transcode if progressive download
- transrate if HLS
- Pace / throttle if Adaptive bit rate or other
- until delivery bit rate consistently exceeds encoding bit rate for all streams in that cell
The problem so far is that an intent can be expressed fairly simply but can result in very complex, arduous, iterative prescriptive operations. The complexity is mostly due to the fact that there are many network elements involved in the "stream video" and "demand vs. capacity" operands of that equation, and that each element can interpret the semantics of "exceed" or "downgrade" differently.
ETSI ISG NFV and the ONF have recently included these topics in their work, and the ONF presented last month at the SDN & OpenFlow World Forum, where I participated in a panel. The ONF is trying to tackle intent-based connectivity in SDN by introducing a virtualizer on the SDN controller.
The virtualizer is a common API that abstracts network-specific elements (the type of element, such as router, DPI or gateway; vendor, interface, protocol, physical or virtual...) and translates intents into a modeling language used to program the different network elements for the desired outcome. That "translation" requires a flexible and sophisticated rendering engine that holds a stateful view of network elements, interfaces, protocols and semantics. The SDN controller would still be able to arbitrate resource allocation as it does today, but with a natural-language programming interface.
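To illustrate the gap the virtualizer has to bridge, here is a toy Python sketch of the video intent above expressed declaratively, together with a naive "rendering" step that expands it into prescriptive actions. The structure, field names and steps are invented for illustration; they do not reflect the actual model of the BOULDER project or of any SDN controller.

```python
# A declarative intent, close to the natural-language example above.
intent = {
    "scope": "per_cell",
    "goal": "stream_video_at_requested_definition",
    "constraint": "aggregate_video_demand <= cell_capacity",
    "on_violation": "downgrade_all_videos_until_constraint_holds",
}

def render(intent: dict) -> list:
    """Naively expand the intent into prescriptive steps.

    A real rendering engine would need a stateful view of the elements,
    vendors, interfaces and protocols actually deployed in the network.
    """
    steps = [
        "tag traffic with cell ID",
        "classify video flows at the DPI / web gateway",
        "measure encoding and delivery bit rate per flow",
        "aggregate measurements " + intent["scope"].replace("_", " "),
    ]
    if intent["on_violation"].startswith("downgrade"):
        steps.append("when demand exceeds capacity: transcode, transrate or pace "
                     "each flow until all streams fit")
    return steps

for step in render(intent):
    print("-", step)
```

Even in this toy form, one line of intent fans out into the kind of multi-step prescriptive chain shown in the list above, which is exactly the translation the rendering engine is expected to automate.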
The ONF started an open source project, BOULDER, to create an open source virtualizer, initially for the OpenDaylight and ONOS controllers.
While this is very early, I believe the virtualizer is destined to change the balance between network engineers and programmers in mobile networks, provided it is implemented widely amongst vendors. No doubt much work will be necessary, as the virtualizer's rendering of natural language into prescriptive policies looks too much like magic at this point, but the intent is good.
This and more in my "SDN & NFV in wireless networks" report and workshop.
Monday, October 19, 2015
SDN world 2015: unikernels, compromises and orchestrated obsolescence
Last week's Layer123 SDN and OpenFlow World Congress brought its usual slew of announcements and claims.
From my perspective, I came away from the show with mixed impressions.
On one hand, it is clear that SDN has now transitioned from proof of concept to commercial trial, if not full commercial deployment, and operators increasingly understand the limits of open source initiatives such as OpenStack for carrier-grade deployments. A telling sign is the increasing number of companies specializing in high-performance, hardware-based switches for OpenFlow and other protocols.
It feels like Open vSwitch has not yet hit its stride, notably in terms of performance, and operators are left to choose between going open source (cost-efficient, but neither scalable nor performant) or compromising with best-of-breed, hardware-based, hardened switches that offer high performance and scalability, but not yet the agility of a software-based implementation. What is new, however, is that operators seem ready to compromise for time to market, rather than wait for a possibly more open solution that may or may not deliver on its promises.
On the NFV front, I feel that many vendors have been forced to lower their silly claims in terms of performance, agility and elasticity. It is quite clear that many of them have been called to prove themselves in operators' labs and have failed to deliver. In many cases, vendors are able to demonstrate agility through VM porting / positioning, using either their VNFM or integration with an orchestrator; they are even, in some cases, able to show some level of elasticity with auto-scaling powered by their own EMS; and many have put out press releases claiming Gbps or Tbps of throughput or millions of simultaneous sessions of capacity...
... but few are able to demonstrate all three at the same time, since their performance achievements have, in many cases, relied on SR-IOV to bypass the hypervisor layer, which ties the VM to the underlying hardware in a manner that makes agility and elasticity extremely difficult to achieve.
Operators, here again, seem bound to compromise between performance and agility if they want to accelerate their time to market.
Operators themselves came in droves to show their progress on the subject, but I felt a distinct change in tone regarding their capacity to get vendors to deliver on the promises of the successive NFV white papers. One issue lies squarely with the operators' own attitude. Many MNOs display unrealistic and naive expectations. They say that they are investing in NFV as a means to attain vendor independence, but they are unwilling to perform any integration themselves. It is very unlikely that large telecom equipment manufacturers will willingly help deconstruct their own value proposition by offering commoditized, plug-and-play, open-interfaced virtualized functions.
SDN and NFV integration is still dirty work. Nothing really performs at line rate without optimization; no agility, flexibility or scalability is really attained without fine-tuned integration. Operators won't realize the benefits of the technology if they don't get in on the integration work themselves.
Lastly, what is still missing from my perspective is a service creation strategy that would make use of a virtualized network. Most network operators still mention service agility and time to market as key drivers, but when asked what they would launch if their network were fully virtualized and elastic today, they quote disappointing early examples such as virtual (!?) VPN, security or broadband on demand... timid translations of existing "services" into a virtualized world. I am not sure most MNOs realize their competition is not each other but Google, Netflix, Uber, Facebook and others...
By the time they launch free and unlimited voice, data and messaging services underpinned by advertising or sponsored models, it will be quite late to think of new services, even if the network is fully virtualized. It feels like MNOs are orchestrating their own obsolescence.
Finally, the latest buzzwords you must have in your presentation this quarter are:
The pet and cattle analogy,
SD WAN,
5G
...and if you haven't yet formulated a strategy with respect to containers (Docker, etc...), don't bother: they're dead and the next big thing is unikernels. This and more in my latest report and workshop on "SDN NFV in wireless networks 2015 / 2016".
Thursday, September 24, 2015
SDN-NFV in wireless 2015/2016 is released
As previously announced, I have been working on my new report "SDN-NFV in wireless 2015/2016" and I am happy to announce its release.
The report features primary and secondary research on the state of SDN and NFV standards and open source, together with an analysis of the most advanced network operators and solutions vendors in the space.
You can download the table of contents here.
Released September 2015
130 pages
- Operators' strategy and deployment reviews: AT&T, China Unicom, Deutsche Telekom, EE, Telecom Italia, Telefonica, ...
- Vendors strategy and roadmap review: Affirmed networks, ALU, Cisco, Ericsson, F5, HP, Huawei, Intel, Juniper, Oracle, Red Hat...
- Drivers for SDN and NFV in telecom networks
- Public, private, hybrid, specialized clouds
- Review of SDN and NFV standards and open source initiatives
- SDN
- Service chaining
- Apache CloudStack, Microsoft Cloud OS, Red Hat, Citrix CloudPlatform, OpenStack, VMWare vCloud,
- SDN controllers (OpenDaylight, ONOS)
- SDN protocols (OpenFlow, NETCONF, ForCES, YANG...)
- NFV
- ETSI ISG NFV
- OPNFV
- OpenMANO
- NFVRG
- MEF LSO
- Hypervisors: VMWare vs. KVM, vs Containers
- How does it all fit together?
- Core and RAN networks NFV roadmap
Terms and conditions: message me at patrick.lopez@coreanalysis.ca
Thursday, September 10, 2015
What we can learn from ETSI ISG NFV PoCs
This post is extracted from my report SDN - NFV in Wireless.
Last year’s report had a complete review of all ETSI NFV proofs of concept: their participants, aims and achievements. This year, I propose a short statistical analysis of the 38 PoCs proposed to date. This analysis provides some interesting insights into where the NFV challenges stand today and who the active participants in their resolution are.
- 21 service providers participate in 38 PoCs at ETSI NFV
- 36% of service providers are in EMEA, responsible for 52% of trials; 41% are in APAC, responsible for 25% of trials; and 23% are in North America, responsible for 23% of trials.
Out of 38 PoCs, only 31% have seen active participation from one or several operators; in the rest, operators have taken a back seat and either lent their name to the process (at least one operator must be involved for a PoC to be validated) or provided high-level requirements and feedback. The most active operators have been Deutsche Telekom and NTT, but only on the first PoCs in 2014. After that, operator participation has been spotty, suggesting that those heavily involved at the beginning of the process have moved on to private PoCs and trials. Since Q1 2015, 50% of PoCs have seen direct operator involvement, ranging from orchestration to NFVI or VIM, from operators who are mostly new to NFV, suggesting that a second wave of service providers is getting into the fray with a more hands-on approach.
Figure 36: Operators activity in PoC
Of the 52 operator participations in the 38 PoCs, Telefonica, AT&T, BT, DT, NTT and Vodafone account for 62% of all PoCs, while other operators have been involved in only one PoC or are just starting. Telefonica has been the most active overall, but all of its involvement was in 2014, with no new PoC participation in 2015. AT&T was involved throughout 2014 and has only recently restarted a PoC in 2015. British Telecom has been the most regular since the start of the program, with on average close to one PoC per quarter.
Figure 37: ETSI NFV PoC operators’ participation
On the vendors’ front, 87 vendors and academic institutions have participated in the PoCs to date, led by HP and Intel (each found in roughly 8% of PoCs). The second tier of participants includes, in descending order, Brocade, Alcatel-Lucent, Huawei, Red Hat and Cisco, each represented in between 3 and 5% of the PoCs. Overwhelmingly, in 49% of cases, vendors participated in only one PoC.
The most interesting statistic, in my mind, shows that squarely half of the PoCs are using SDN for virtual networking or the VIM, and the same proportion (but not necessarily the same PoCs) have deployed a VNF orchestrator in some form.
Wednesday, August 12, 2015
The orchestrator conundrum in SDN and NFV
We have seen over the last year a flurry of activity around orchestration in SDN and NFV. As I have written about here and here, orchestration is a key element and will likely make or break SDN and NFV success in wireless.
A common mistake associated with orchestration is to assume that it covers the same elements or objectives in SDN and NFV. This is a real issue because, while SDN orchestration is about resource and infrastructure management, NFV orchestration should be about service management. There is admittedly a level of overlap, particularly if you define services as both network and customer sets of rules and policies.
To simplify, we'll say here that SDN orchestration is about resource allocation and about auditing, assuring and managing virtual, physical and mixed infrastructure, while NFV's is about creating rules for traffic and service instantiation based on subscriber, media, origin, destination, etc...
The two orchestration models are complementary (it is harder to create and manage services if you do not have visibility into or an understanding of the available resources and, conversely, it can be more efficient to manage resources knowing what services run on them), but they are not necessarily well integrated. A bevy of standards and open source organizations (ETSI ISG NFV, OPNFV, MEF, OpenStack, OpenDaylight...) are busy trying to map one to the other, which is no easy task. SDN orchestration is well defined in terms of its purview, less so in terms of implementation, but a few models are available to experiment with. NFV orchestration is in its infancy, still defining what the elements of service orchestration are, their proposed interfaces with the infrastructure and the VNFs, and, generally speaking, how to create a model for service instantiation and management.
For those who have followed this blog, and for my clients who have attended my SDN and NFV in wireless workshop, it is well known that the management and orchestration (MANO) area is under intense scrutiny by operators and vendors alike.
Increasingly, infrastructure vendors who are seeing the commoditization of their cash cows understand that the brain of tomorrow's network will be in MANO.
Think of MANO as the network's app store. It controls which apps (VNFs) are instantiated, what level of resources is necessary to run them, and how VNFs are stitched together (service chaining) to create services.
The problem is that MANO is not yet defined by ETSI, so anyone who wants to orchestrate VNFs today is either building their own or is stuck with the handful of vendors who provide MANO-like engines. Since MANO is ill-defined, the integration requires a certain level of proprietary effort. Vendors will say that it is all based on open interfaces, but the reality is that there is no mechanism in the standard today for a VNF to declare its capabilities, its needs and its intent, so a MANO integration requires some level of abstraction or deep fine-tuning.
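As an illustration of what such a declaration could look like, here is a minimal, hypothetical VNF self-description sketched in Python. Every field name below is an assumption made for the example; nothing of the sort exists in the ETSI specification today, which is precisely the gap.

    # Hypothetical VNF self-description: what a VNF could expose to a MANO so that
    # onboarding does not require bespoke integration. Fields are illustrative, not standard.
    vnf_descriptor = {
        "name": "vEPC-SGW",
        "capabilities": ["gtp-u termination", "bearer management"],   # what it does
        "needs": {"vcpus": 4, "memory_gb": 16,                        # what it requires to run
                  "interfaces": ["s1-u", "s5/s8"]},
        "intent": {"scale_out_trigger": "sessions > 100000",          # how it expects to behave
                   "placement": "same zone as vMME"},
        "lifecycle_hooks": ["instantiate", "heal", "scale", "terminate"],
    }

    def can_onboard(descriptor: dict, available: dict) -> bool:
        """Toy admission check a MANO could run before instantiating the VNF."""
        needs = descriptor["needs"]
        return (needs["vcpus"] <= available["vcpus"]
                and needs["memory_gb"] <= available["memory_gb"])

    print(can_onboard(vnf_descriptor, {"vcpus": 8, "memory_gb": 32}))  # True

Absent something of this sort in the standard, each MANO / VNF pairing remains a bespoke integration project.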
As a result, MANO can become very sticky if deployed in an operator network. The VNFs can come and go and vendors can be swapped at will, but the MANO has the potential to be a great anchor point.
It is not a surprise therefore to see vendors investing heavily in this field or acquiring the capabilities:
- Cisco acquired TailF in 2014
- Ciena acquired Cyan this year
- Cenx received $12.5M in funding this year...
At the same time, Telefonica has launched an open source collaborative effort called openMANO to stimulate the industry and reduce the risk of verticalization between infrastructure and MANO vendors.
For more information on how SDN and NFV are implemented in wireless networks, and on vendors' and operators' strategies, look here.
Tuesday, September 30, 2014
NFV & SDN 2014: Executive Summary
This post is extracted from my report published October 1, 2014.
Cloud and Software Defined Networking have been technologies explored successively in academia, IT and enterprise since 2011 and the creation of the Open Networking Foundation. They were mostly subjects of interest relegated to science projects in wireless networks until, in the fall of 2013, a collective of 13 mobile network operators co-authored a white paper on Network Functions Virtualization. This white paper became a manifesto and catalyst for the wireless community and was seminal to the creation of the eponymous ETSI Industry Specification Group.
Almost simultaneously, AT&T announced the creation of a new network architectural vision, Domain 2.0, relying heavily on SDN and NFV as building blocks for its next generation mobile network.
Today, SDN and NFV are hot topics in the industry and many companies have started to position themselves with announcements, trials, products and solutions.
This report is the result of hundreds of interviews, briefings and meetings with many operators and vendors active in this field. In the process, I have attended, participated in and chaired various events such as OpenStack, ETSI NFV ISG and the SDN & OpenFlow World Congress, and became a member of ETSI, OpenStack and the TM Forum.
The Open Networking Foundation, the Linux Foundation, OpenStack, the OpenDaylight project, IEEE, ETSI and the TM Forum are just a few of the organizations involved in the definition, standardization or facilitation of cloud, SDN and NFV. This report provides a view of each organization's contribution and progress to date.
Unfortunately, there is no such thing as SDN-NFV today. These are technologies that have overlaps and similarities but still stand widely apart. Software Defined Networking is about managing network resources. It is an abstraction that allows the definition and management of IP networks in a new fashion. It separates the data plane from the control plane and allows network resources to be orchestrated and used across applications independently of their physical location. SDN exhibits a level of maturity through a variety of contributions to its leading open source community, OpenStack. In its ninth release, the architectural framework is well suited to abstracting cloud resources, but it is dominated by enterprise and general IT interests, with little in terms of focus and applicability for wireless networks.
Network Functions Virtualization is about managing services. It allows the breaking down and instantiation of software elements into virtualized entities that can be invoked, assembled, linked and managed to create dynamic services. NFV, by contrast, through its ETSI specification group, is focused exclusively on wireless networks but, in the process of releasing its first specifications, is still very incomplete in its architecture, interfaces and implementation.
SDN may or may not comprise NFV elements, and NFV may or may not be governed or architected using SDN. Many of the Proofs of Concept (PoCs) examined in this document attempt to map SDN architecture onto NFV functions in the hope of bridging the gap. Both frameworks can be complementary, but both are suffering from growing pains and a diverging set of objectives.
The intent is to paint a picture of the state of SDN and NFV implementations in mobile networks. This report describes what has been trialed, deployed in labs and deployed commercially, which elements are likely to be virtualized first, what the timeframes are, and what the strategies and main players are.
Tuesday, September 9, 2014
SDN & NFV part VI: Operators, dirty your MANO!
While NFV at ETSI was initially launched by network operators with their founding manifesto, in many instances we see that, although there is a strong desire to force the commoditization of telecom appliances, there is little appetite among operators to perform the sophisticated integration necessary for these new systems to work.
This is, for instance, reflected in MANO, where operators seem to have put the onus back on vendors to lead the effort.
Some operators (Telefonica, AT&T, NTT…) seem to invest resources not only in monitoring the process but also in actual development of the technology, but by and large, according to my study, MNOs seem to have taken a passenger seat in NFV implementation efforts. Many vendors note that MNOs tend to have a very hands-off approach towards the PoCs they "participate" in, offering guidance or requirements, or in some cases just lending their name to the effort without "getting their hands dirty".
The Orchestrator’s task in NFV is to integrate with OSS/BSS and to manage the lifecycle of the VNFs and NFVI elements.
It onboards new network services and VNFs, and it performs service chaining in the sense that it decides which VNFs the traffic must go through, and in what order, according to routing rules and templates.
These routing rules are called forwarding graphs. Additionally, the Orchestrator performs policy management between VNFs. Since all VNFs are proprietary, integrating them within a framework that allows their components to interact is a huge undertaking. MANO is probably the part of the specification that is the least mature today and requires the most work.
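For intuition, here is a minimal sketch in Python of what a forwarding graph boils down to in practice: an ordered chain of VNFs that a given class of traffic must traverse, selected by simple matching rules. The structure and names are illustrative assumptions, not the ETSI forwarding graph model itself.

    # Toy forwarding-graph selection: pick an ordered VNF chain for a flow based on
    # simple match rules. Illustrative only; not an ETSI data model.
    FORWARDING_GRAPHS = [
        # (match criteria, ordered VNF chain)
        ({"media": "video"}, ["vDPI", "vVideoOptimizer", "vCGNAT"]),
        ({"media": "voip"},  ["vSBC", "vFirewall"]),
        ({},                 ["vFirewall", "vCGNAT"]),   # default chain
    ]

    def select_chain(flow: dict) -> list:
        """Return the first VNF chain whose match criteria are all satisfied by the flow."""
        for criteria, chain in FORWARDING_GRAPHS:
            if all(flow.get(k) == v for k, v in criteria.items()):
                return chain
        return []

    # A video flow is steered through DPI and optimization before NAT.
    print(select_chain({"subscriber": "prepaid", "media": "video"}))
    # ['vDPI', 'vVideoOptimizer', 'vCGNAT']

The hard part in practice is not expressing the chain, it is getting proprietary VNFs to expose enough of themselves for a third-party orchestrator to build and enforce it.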
Since it is the brain of the framework, failure of MANO to reach a level of maturity enabling consensus between the participants of the ISG will inevitably relegate NFV to vertical implementations. This could lead to a network made of a collection of vertically virtualized elements, each with its own MANO or very high level API abstractions, considerably reducing overall system elasticity and programmability. SDN OpenStack-based models can be used for MANO orchestration of resources (the Virtualized Infrastructure Manager) but offer little applicability in the pure orchestration and VNF management field beyond the simplest IP routing tasks.
Operators who are serious about NFV in wireless networks should seriously consider developing their own orchestrator or, at a minimum, implementing strict orchestration guidelines. They could force vendors to adopt a minimum set of VNF abstraction templates for service chaining and policy management.
Labels: ATT, ETSI, NFV, NTT, openstack, orchestration, SDDC, SDN, service enablement, Telefonica, virtualized
Wednesday, July 2, 2014
SDN & NFV part IV: testing / monitoring in Wireless Networks
One problem I have come across lately relates to one of the tenets of NFV and SDN: reducing the potential for vendor lock-in at the hardware level. It is true that virtualization of the software allows commercial off-the-shelf servers to be used in lieu of appliances, for a fraction of the cost of acquisition and operation.
One of the problems that is emerging is the testing, monitoring, troubleshooting and quality assurance of virtualized networks. Vendors in this field have traditionally relied on passive probes performing traffic interception and analysis at various points of the network and its interfaces.
In an SDN/NFV world, it becomes difficult to test, monitor and troubleshoot a single service when the resources associated with the service are mutualized, virtualized and elastic.
Right now, most virtualized functions are at the service or product level, i.e. a vendor takes a product, for instance an EPC, and virtualizes it and its components. The deployment remains monolithic and, while there might be elasticity within the solution, the components themselves cannot be substituted. This results in a multi-vendor environment only as far as the large functions are concerned, not at the component level.
Monitoring and assuring traffic between components becomes problematic because of the lack of standardization of East-West interfaces.
Testing, monitoring and QA vendors must virtualize their offerings through software probes and taps implemented at virtual network interface cards (vNICs) or virtual switches, but more importantly they must integrate deeply with orchestrators, element managers and controllers in order to monitor the creation, instantiation and growth of virtual machines.
This implementation requires the maintenance of a stateful mapping of network functions and traffic flows in order to correlate the data and signalling planes.
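As a rough illustration, the sketch below shows the kind of stateful mapping this implies: a registry kept current from orchestrator lifecycle events and used to tie an observed flow back to the VNF instance serving it, so that data plane and signalling plane records can be correlated. All event and field names are assumptions made for the example, not any orchestrator's actual API.

    # Toy stateful mapping between VNF instances and the flows they serve.
    # Event names and fields are illustrative assumptions, not a real orchestrator API.
    from collections import defaultdict

    class MonitoringRegistry:
        def __init__(self):
            self.instances = {}                # instance_id -> metadata (vnf type, host, ...)
            self.flows = defaultdict(set)      # instance_id -> set of observed flow 5-tuples

        def on_orchestrator_event(self, event: dict):
            """Keep the map current as VNFs are instantiated, scaled or terminated."""
            if event["type"] in ("instantiate", "scale_out"):
                self.instances[event["instance_id"]] = {"vnf": event["vnf"], "host": event["host"]}
            elif event["type"] == "terminate":
                self.instances.pop(event["instance_id"], None)
                self.flows.pop(event["instance_id"], None)

        def on_flow_observed(self, instance_id: str, five_tuple: tuple):
            self.flows[instance_id].add(five_tuple)

        def correlate(self, five_tuple: tuple):
            """Return which VNF instance (and host) carried a given flow, if still known."""
            for instance_id, flows in self.flows.items():
                if five_tuple in flows:
                    return instance_id, self.instances.get(instance_id)
            return None

    registry = MonitoringRegistry()
    registry.on_orchestrator_event({"type": "instantiate", "instance_id": "vmme-01",
                                    "vnf": "vMME", "host": "compute-3"})
    registry.on_flow_observed("vmme-01", ("10.0.0.1", "10.0.0.2", 36412, 36412, "sctp"))
    print(registry.correlate(("10.0.0.1", "10.0.0.2", 36412, 36412, "sctp")))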
At this stage, vendors in this field must prepare themselves for a rather long business development engagement in order to penetrate the ecosystem and integrate with each vendor and solution independently. The effort is not unlike that of the orchestrator vendors, who need to integrate deeply with each virtual network function vendor in order to accurately understand their respective capabilities and to build the VNF catalogue and lifecycle.
As for many in the NFV space, the commercial strategy must also evolve towards licensing rather than transaction or volume-based charging. Virtualized network functions will see the number of elements and vendors grow into the hundreds and thousands, and inevitably large system integrators will become the key single interface to network operators.
Labels: NFV, probe, QoE, QoS, SDN, traffic management, virtualized
Tuesday, July 1, 2014
Mobile network 2030

It is summer, nice and warm. England and Italy are out of the world cup, France will beat Germany on Friday, then Brazil and Argentina in the coming weeks to obtain their second FIFA trophy. It sounds like a perfect time for a little daydreaming and telecom fiction...
The date is February 15, 2030
The mobile world congress is a couple of weeks away and has returned to Cannes, as the attendance and indeed the investments in what used to be mobile networks have reduced drastically over the last few years. Finished are the years of opulence and extravagant launches in Barcelona, the show now looks closer to a medium sized textile convention than the great mass of flashy technology and gadgets it used to be in its heyday.
When did it start to devolve? What was the signal that killed what used to be a trillion dollar industry in the 90's and early 2000's? As usual, there is not one cause, but rather a convergence of events that took on a momentum few saw coming and fewer tried to stop.
Net neutrality was certainly one of these events. If you remember, back in 2011, people started to realize the extent to which fixed and wireless networks were exposed to legal and illegal interception. Following the various NSA scandals, public pressure mounted to protect digital privacy.
In North America, the battle was fierce between the pro- and anti-neutrality camps, eventually leading to a status quo of sorts, with many content providers and network operators in an uneasy collaborative dynamic. Originally, content providers unwilling to pay for traffic delivery in wireless networks attempted to secure a superior user experience by implementing increasingly bandwidth-hungry apps. When these started to come into contention for network resources, carriers stepped in and aggressively throttled, capped or otherwise "optimized" traffic. In reaction, premium content providers moved to an encrypted traffic model as a means to obfuscate traffic and prevent interception, mitigation and optimization by carriers. Soon enough, though, the costs and latency added by encryption proved impractical. Furthermore, some carriers started to throttle and cap all traffic equally, claiming to adhere to the letter of net neutrality, which ended up having a terrible effect on user experience. In the end, cooler heads prevailed and content providers and carriers created integrated video networks, where transport, encryption and ad insertion were performed at the edge, while targeting, recommendation and fulfillment remained in the content provider's infrastructure.
In Europe, meanwhile, content and service providers saw "net neutrality" as the perfect excuse to pressure political and regulatory organizations into forcing network providers to deliver digital content unfiltered and un-prioritized, at best possible effort. The result, as we know, ended up being quite disastrous: with content mostly produced outside Europe and encrypted, operators became true utility service providers. They discovered overnight that their pipes could become even dumber than they already were.
Of course, the free voice and texting services launched by some of the new 5G licensees in the 2020's accelerated the trend, and the nationalization of many of the pan-European network operator groups.
The transition was relatively easy, since many had already moved to fully virtual networks and contracted ALUSSON, the last "European" telecom equipment manufacturer, to manage their networks. After operators had collectively spent over 100 billion euros to virtualize their networks in the first place, ALUSSON emerged as the only clear winner of the cost benefits brought by virtualization.
Indeed, virtualization was attractive and very cost effective on paper, but it proved very complex and organizationally intensive to implement in the end. Operators had miscalculated their capacity to shift their workforce from telecom engineering to IT, only to find out that the skill set needed to manage their networks had always been in the vendors' hands. Few groups were able to massively retool their workforce, if you remember the great telco strikes of 2021-2022.
In the end, most ended up contracting out and transitioning their assets to their network vendor. Liberated from the task of managing their network, most were obviously eager to launch new services, which had been one of the initial rationales for virtualization. Unfortunately, they found out that service creation was much better executed by small, agile, young entrepreneurial structures than by large, unionized, middle-aged ones... With a couple of notable exceptions, broadband networks were written off as broadband access was written into European countries' constitutions, and networks were aggregated at the pan-European level to become pure utilities when they were not downright nationalized.
Outside Europe and North America, Goopple and HuaTE dominate, after voraciously acquiring licenses in emerging countries ill-equipped to negotiate the long-term value of these licenses against the free network infrastructure these companies provided. The launch of their proprietary SATERR (Satellite Aerial Terrestrial Relay) technology proved instrumental in creating the first fully vertical service / network / content / device conglomerates.
Few were the operators able to discern the importance of evolving their core asset, "enabling communication", into a dominant position in their market. Those who succeeded share a few common attributes:
They realized first that their business was not about counting calls, bytes or texts but about enabling communication. They started to think in terms of services rather than technology and understood that the key was in service enablement. Understanding that services come and go and die in a matter of months in the new economy, they strove not to provide the services themselves but to create the platform to enable them.
In some cases, they transitioned into full advertising and personal digital management agencies, harnessing big data and analytics to enrich digital services with presence, location, preference, privacy and corporate awareness. This required much organizational change but, as it turned out, marketing analysts were much easier and more cost effective to recruit than network and telecom engineers. Network management became the toolset, not the vocation.
In other cases, operators became abstraction layers, enabling content and service providers to better target, advertise, aggregate, obfuscate, disambiguate and contextualize physical and virtual communication between people and machines.
In all cases, they understood that the "value chain" as they used to know it, and the consumer need for communication services, were better served by an ever-changing ecosystem, where there was no "position of strength" and where coopetition was the rule rather than the exception.
Thursday, June 26, 2014
LTE World Summit 2014
This year's 10th edition of the conference seems to have found a new level of maturity. While VoLTE, RCS and IMS are still subjects of interest, we seem to be past the hype at last (see last year), with a more pragmatic outlook towards implementation and monetization.
I was happy to see that most operators are now recognizing the importance of managing video experience for monetization. Du UAE's VP of Marketing, Vikram Chadha, seems to get it:
"We are transitioning our pricing strategy from bundles and metering to services. We are introducing email, social media, enterprise packages and are looking at separating video from data as a LTE monetization strategy."
As a result, the keynotes were more prosaic than in past editions, focusing on the cost of spectrum acquisition and the regulatory pressure in the European Union preventing operators from mounting any defensible position against the OTT assault on their networks. Much of the show's agenda focused on pragmatic subjects such as roaming, pricing, policy management, heterogeneous networks and wifi/cellular handover. Nothing obviously earth-shattering on these subjects, but steady progress, as the technologies transition from lab to commercial trials and deployment.
As an example, there was a great presentation by Bouygues Telecom's EVP of Strategy Frederic Ruciak highlighting the company's strategy for the launch of LTE in France, a very competitive market, and how the company was able to achieve the number one spot in LTE market share despite being the number three "challenger" in 2G and 3G.
The next buzzword on the hype cycle to rear its head is NFV, with many operator CTOs publicly hailing the new technology as the magic bullet that will allow them to "launch services in days or weeks rather than years". I am getting quite tired of hearing that rationalization as an excuse for the multimillion-dollar investments made in this space, especially when no one seems to know what these new services will be. Right now, the only arguable benefit is capex containment, and I have seen little evidence that it will pass this stage in the mid term. Like the teenage sex joke, no one seems to know what it is, but everybody claims to be doing it.
There is still much to be resolved on this matter and that discussion will continue for some time. The interesting new positioning I heard at the show is appliance vendors referring to their offerings as PNFs (physical network functions), in contrast to, and as enablers for, VNFs. Although it sounds like a marketing trick, it makes a lot of sense for vendors to illustrate how NFV inserts itself into a legacy network, inevitably leading to a hybrid network architecture.
The consensus here seems to be that there are two prevailing strategies for the introduction of virtualized network functions.
- The first one, "cap and grow", sees existing infrastructure equipment being capped at a certain capacity and little by little complemented by virtualized functions, allowing incremental traffic to find its way onto the virtualized infrastructure. A variant might be "cap and burst", where a function subject to bursty traffic is dimensioned on physical assets to the mean peak traffic and all excess traffic is diverted to a virtualized function (a rough sketch of this burst decision follows the list below).
- The second seems to favour the creation of vertical virtualized networks for market or traffic segments that are greenfield, M2M and VoLTE being the most cited examples.
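As announced in the first bullet, here is a minimal sketch of the "cap and burst" split, assuming a single aggregate load figure and a physical capacity dimensioned to the mean peak; the capacity value and session units are invented for the example.

    # Toy "cap and burst" split: physical assets carry traffic up to their dimensioned
    # capacity (mean peak), anything above bursts onto virtualized instances.
    # The capacity figure and session units are illustrative assumptions.
    PHYSICAL_CAPACITY_SESSIONS = 100_000   # dimensioned to the mean peak

    def split_load(total_sessions: int) -> dict:
        """Return how many sessions stay on the appliance vs. burst to VNFs."""
        on_physical = min(total_sessions, PHYSICAL_CAPACITY_SESSIONS)
        on_virtual = max(0, total_sessions - PHYSICAL_CAPACITY_SESSIONS)
        return {"physical": on_physical, "virtual": on_virtual}

    # Below the cap, everything stays on the appliance; above it, the excess bursts out.
    print(split_load(80_000))    # {'physical': 80000, 'virtual': 0}
    print(split_load(130_000))   # {'physical': 100000, 'virtual': 30000}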
Both strategies have advantages and flaws that I am exploring in my upcoming report on "NFV & virtualization in mobile networks 2014". Contact me for more information.
Labels: business case, Capex, cost containment, data cap, IMS, load balancer, LTE, M2M, mass market, mobile broadband, Monetization, NFV, openstack, OTT, policy enforcement, RCS, virtualized, VoLTE