Wednesday, May 24, 2017
Telecom TV interview at Network Functions Virtualization World Congress
Thursday, May 11, 2017
Customer Centric Networks and SDN NFV
These slides were presented in May 2017 at the NFV World Congress in San Jose.
They illustrate how we are looking at deploying cloud microservices at the edge of our networks to provide unique experiences using SDN, open source and NFV.
Labels:
CORD,
NFV,
open source,
SDDC,
SDN,
Telefonica
Monday, April 10, 2017
Telefonica's innovation framework
I have received many requests over the last few months to explain our innovation process in more detail. Now that our innovation methodology is a widely commented-on Harvard Business Review case study, I thought it was a good time to shed some light on how a large telco such as Telefonica can innovate in a fast-paced environment.
Innovation is not only a decision, it's a process and a methodology. In our case, we have different teams looking after external innovation, through business ventures and venture capital, and internal innovation, covering networks, data and moonshots. The teams I support, focused on network innovation, are adapting the lean elephant methodology to invent tomorrow's mobile, fixed and TV networks.
Ideation
The process starts with directed ideation, informed by our corporate customer segmentation, customer sentiment studies and selected themes. An innovation call centered on specific themes such as "imagine tomorrow's TV" or "artificial intelligence and network QoE" is launched across the group, with local briefings including our selection parameters. A jury is convened to review the hundreds of ideas and shortlist the most interesting. The selected intrapreneurs have a month to prepare a formal pitch for their ideas. They are assisted by customer experience specialists who help them refine the problem they seek to solve, its applicability and its market appeal.
Feasibility
After the pitch and selection, the intrapreneurs transition to the innovation team full time and are given a few weeks to create a feasibility plan and a preliminary resource budget for prototyping. Once ready, the successful applicants present the plan in detail to the jury.
Prototyping
The lucky few who pass this gate are given 3 to 8 months to prototype their project, together with commensurate resources. At this stage, the project must have strong internal sponsorship, with verticals or markets within Telefonica committing to take the prototype into their labs for functional testing. The resulting prototype, together with the value proposition and addressable market, is reviewed before passing to the next phase.
Market trial
The prototype is then hardened and deployed in a commercial network for friendly, limited A/B testing and refinement. This phase can last 2 to 6 months, with an increasing number of users and growing sophistication in measuring the value proposition's effectiveness. During this phase, a full product / service business case is also finalized, using the data collected during the market trial.
Productization and transfer
Does the project meet customer needs? Is it innovative and differentiating? Is it profitable, and does Telefonica have an unfair advantage in solving real market problems? These are some of the tough questions the intrapreneur and their team must be able to answer before the solution can be productized and eventually transferred to one of our verticals, or used to create a new one.
This process has been the source of Telefonica's early advances in IoT, big data and smart cities. It has also killed, merged, pivoted and spun off hundreds of projects. The network innovation teams I support aim to radically change network topology, deployment and value chains using software-defined networking, virtualization, containerization and lambda computing all the way to the edge of our networks. We are developers, network hackers, user experience experts, computer scientists, DevOps engineers...
The next months will see some exciting announcements on this. Stay tuned.
You can catch me and we can chat about it at the upcoming NFV World Congress or TM Forum Live.
Labels:
containers,
lambda,
NFV,
SDDC,
SDN,
Telefonica
Wednesday, January 11, 2017
Innovation and transformation, micro segments and strands
When I first met the CEO of Telefonica Research and Development, David Del Val, he asked me what I thought of the direction the industry was taking. I have not been shy on this blog and in other public forums about my opinion on operators' lack of innovation and transformation. My comments went something like this:
"I think that in a time very soon, I don't know if it's going to be in 3 years, 5 or 10, voice will be free, texts will be free, data will be free or as close to a monthly utility price as you can think. Already, countries are writing access to broadband into their citizens' fundamental rights. Most operators are talking about innovation and new services, but let's face it, they have had a pretty poor track record. MMS was to be the killer app for GPRS/EDGE, push to talk for 3G, video calling for HSPA, VoLTE for 4G... There is no shame in being an operator of a very good, solid, inexpensive connectivity service. Some companies are very successful doing that and there will be more in the future. But you don't need hundreds of thousands of people for that. If operators' ambition is to "monetize", "launch new services", "open new revenue streams", "innovate", they have to transform first. And it's gonna hurt."
At that point, I wasn't sure I had made the best first impression but, as you know now, that discussion ended up turning into a full-time collaboration.
The industry is undergoing changes that will accelerate and break companies that are not adaptable or capable of rethinking their approach.
4G wasn't designed as a video network capable of doing other things like browsing and voice; the telecoms industry designed 4G to be a multipurpose mobile broadband network, capable of carrying VoIP, browsing, messaging... but really, it wasn't so hard to see that video would be the dominant and fastest-growing part of traffic and cost. I don't have a crystal ball, but I had publicly identified the problem more than 7 years ago.
The industry's failure to realize this has left us in a situation where we have not engaged video providers early enough to create a mutually profitable business model. The result is that traffic is increasing dramatically across all networks, while revenues are stagnating or decreasing because video services are mostly encrypted. At the same time, our traditional revenues from voice and messaging are being eroded by other providers.
As the industry gears up towards 5G and we start swimming in massive MIMO, beam-forming, edge computing, millimeter wave, IoT, drones and autonomous vehicles, I think it is wise to understand what it will take to really deliver on these promises.
Agile, lean, smart, open, software-defined, self-organizing, autoscalable, virtualized, deep learning, DevOps, orchestrated, open-source... my head hurts from all the trappings of 2016's trendy telco hipster lingo.
This is not going to get better in 2017.
The pressure on operators to generate new revenues and to decrease costs drastically will increase dramatically. There are opportunities to create new revenue streams (fintech, premium video, IoT…) or to reduce costs (SDN, NFV, DevOps, open source…), but they require initial investments whose business cases are uncertain because they are unproven. We are only now starting to see the operators who made these investments over the last 3 years announce results. These investments are hard for any operator to make, because they do not follow our traditional model. For the last 20 years, operators have been conditioned to work in standards bodies to invent the future collectively and then buy technology solutions from large vendors. The key to that model was not innovation; it was sustainability and interoperability.
The internet has broken that model.
I think that operators who want to be more than bit-pipe providers need to create unique experiences for consumers, enterprises, verticals and things. Unique experiences can only be generated from context (understanding the customer, their desires, intent, capacity, limitations...), adaptation (we don't need slices, we need strands) and control (end-to-end performance, QoS and QoE per strand). Micro-segmentation has technical, but more importantly operational and organizational, impacts.
Operators can't hope to control, adapt, contextualize and innovate if they can't control their network. Today, many have progressively vacated the field of engineering to become network administrators, writing RFPs to select vendors or, going further, mandating integrators to select and deploy solutions. The result is networks that are very undifferentiated, where a potential "innovation" from one operator can be rolled out by another with a purchase order, and where a change in a tariff, the on-boarding of a new enterprise customer or a new service takes years to deploy, hundreds of people and millions of euros.
Most operators can't launch a service with an addressable market of fewer than 10 million people, or it won't make the business case right off the bat.
There are solutions, but they are tough medicine. You can't really reap the rewards of SDN or NFV if you don't control their implementation. It's useless to have a programmable network if you can't program. Large integrators and vendors have made the effort to retool, hire and train. Operators must do the same unless they want to be MVNOs on their own networks.
Innovation is trying. Projects can fail, technology evolves, but transformation is sustainable.
Thursday, May 5, 2016
MEC: The 7B$ opportunity
Extracted from Mobile Edge Computing 2016.
Table of contents
Defining an addressable market for an emerging product or technology is always an interesting challenge. On one hand, you have to evaluate the problems the technology solves and their value to the market; on the other hand, you have to appreciate the possible cost structure and the psychological price expectations of potential buyers and users.
This warrants a top-down and bottom-up approach, looking at how the technology can contribute to or substitute some current radio and core network spending, together with a cost-based review of the potential physical and virtual infrastructure. [...]
The cost analysis is comparatively easy, as it relies on the well understood current cost structure for physical hardware and virtual functions. The assumptions surrounding hardware costs have been reviewed with the main x86-based hardware vendors. The VNF pricing relies on discussions with large and emerging telecom equipment vendors on the price structure of standard VNFs such as EPC, IMS, encoding, load balancers and DPI. Traditional telco professional services, maintenance and support costs are apportioned and included in the calculations.
The overall assumption is that MEC will become part of the fabric of 5G networks and that MEC equipment will cover up to 20% of a network (coverage or population) when fully deployed.
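To make the bottom-up logic concrete, here is a minimal sketch of how such a sizing calculation can be structured, in Python. Every figure (site counts, unit costs, ratios) is a hypothetical placeholder for illustration, not a number from the report.

```python
# Illustrative bottom-up sizing of a MEC addressable market.
# Every input below is a hypothetical placeholder, not a figure from the report.

def mec_tam(total_sites: int,
            coverage_ratio: float,     # share of the network equipped with MEC
            hw_cost_per_site: float,   # x86 edge server(s) per site, CAPEX
            vnf_cost_per_site: float,  # licences for EPC/IMS/encoding/DPI VNFs, CAPEX
            services_ratio: float,     # integration & professional services, % of CAPEX
            opex_ratio: float,         # yearly maintenance & support, % of CAPEX
            years: int) -> dict:
    """Return a rough CAPEX/OPEX split for a MEC roll-out."""
    equipped_sites = int(total_sites * coverage_ratio)
    capex = equipped_sites * (hw_cost_per_site + vnf_cost_per_site)
    capex *= (1 + services_ratio)              # add integration services
    opex = capex * opex_ratio * years          # recurring support over the period
    return {"equipped_sites": equipped_sites, "capex": capex, "opex": opex,
            "total": capex + opex}

# Example: 10,000 macro sites, 20% MEC coverage, purely illustrative unit costs.
print(mec_tam(total_sites=10_000, coverage_ratio=0.20,
              hw_cost_per_site=25_000, vnf_cost_per_site=40_000,
              services_ratio=0.25, opex_ratio=0.15, years=5))
```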
The report features the total addressable market, cumulative and incremental, for MEC equipment vendors and integrators, broken down by CAPEX / OPEX and by consumer, enterprise and IoT services.
It then provides a review of operators' opportunities and revenue models for each segment.
Monday, April 25, 2016
Mobile Edge Computing 2016 is released!
5G networks will bring extreme data speed and ultra low latency to enable Internet of Things, autonomous vehicles, augmented, mixed and virtual reality and countless new services.
Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators.
Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.
This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are and how this potential multi-billion-dollar technology can change how OTTs, operators, enterprises and machines enable innovative and enhanced services.
Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator perspective.
The table of contents and executive summary can be downloaded here.
Monday, April 4, 2016
MEC 2016 Executive Summary
2016 sees a sea change in the fabric of the mobile value chain. Google reports that mobile search revenue now exceeds desktop, while 47% of Facebook members are now exclusively on mobile, generating 78% of the company's revenue. It has taken time, but most OTT services that were initially geared towards the internet are rapidly transitioning towards mobile.
The impact is still to be felt across the value chain.
OTT providers have a fundamentally different view of services and value different things than mobile network operators. While mobile networks have been built on the premises of coverage, reliability and ubiquitous access to metered network-based services, OTTs rely on free, freemium, ad-sponsored or subscription-based services where fast access and speed are paramount. An increase in latency impacts page load and search times and can cost OTTs billions in revenue.
The reconciliation of these views and the emergence of a new coherent business model will be painful but necessary, and will lead to new network architectures.
Traditional mobile networks were originally designed to deliver content and services that were hosted on the network itself. The first mobile data applications (WAP, multimedia messaging…) were deployed in the core network, as a means to be as close as possible to the user while remaining centralized, to avoid replication and synchronization issues.
3G and 4G networks still bear the design associated with this antiquated distribution model. As technology and user behaviours have evolved, a large majority of the content and services accessed on cellular networks today originates outside the mobile network. Although content is now stored in and accessed from clouds, caches, CDNs and the internet, a mobile user still has to go through the internet, the core network, the backhaul and the radio network to get to it. Each of these steps sees a substantial decrease in throughput capacity, from hundreds of Gbps down to Mbps or less. Additionally, each hop adds latency to the process. This is why networks continue to invest in increasing throughput and capacity. Streaming a large video or downloading a large file from a cloud or the internet is a little bit like trying to suck ice cream through a 3-foot bending straw.
Throughput and capacity will certainly grow tremendously with the promises of 5G networks, but latency remains an issue. Reducing latency requires reducing the distance between the consumer and the point where content and services are served. CDNs and commercial specialized caches (Google, Netflix…) have been helping reduce latency in fixed networks by caching content as close as possible to where it is consumed, propagating and synchronizing content across Points of Presence (PoPs). The mobile networks' equivalent of PoPs are the eNodeBs, RNCs or cell aggregation points. These network elements, part of the Radio Access Network (RAN), are highly proprietary, purpose-built platforms that route and manage mobile radio traffic. Topologically, they are the closest elements mobile users interact with when accessing mobile content. Positioning content and services there, right at the edge of the network, would substantially reduce latency.
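As a rough illustration of why topology matters, the sketch below sums a per-hop latency budget for content served from a distant origin versus content pre-positioned at the RAN edge. The per-hop figures are hypothetical orders of magnitude, not measurements from the report.

```python
# Back-of-the-envelope latency budget (one-way, in milliseconds).
# All per-hop values are hypothetical orders of magnitude, for illustration only.

HOPS_REMOTE = {            # user -> radio -> backhaul -> core -> internet -> origin
    "radio access": 10.0,
    "backhaul": 5.0,
    "core network": 5.0,
    "internet transit": 20.0,
    "origin server": 10.0,
}

HOPS_EDGE = {              # user -> radio -> MEC host co-located with the eNodeB
    "radio access": 10.0,
    "edge host": 1.0,
}

def one_way_latency(hops: dict) -> float:
    return sum(hops.values())

if __name__ == "__main__":
    remote = one_way_latency(HOPS_REMOTE)
    edge = one_way_latency(HOPS_EDGE)
    print(f"remote origin: {remote:.0f} ms, edge cache: {edge:.0f} ms, "
          f"saving: {remote - edge:.0f} ms per one-way leg")
```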
For the first time, there is an opportunity for network operators to offer OTTs what they value most: ultra-low latency, which will translate into a premium user experience and increased revenue. This will come at a cost, as physical and virtual real estate at the edge of the network will be scarce. Net neutrality will not work at the scale of an eNodeB, as commercial law will dictate which few application and service providers will be able to pre-position their content.
Labels:
5G,
AR,
CDN,
IoT,
latency,
MEC,
mobile broadband,
Monetization,
NFV,
OTT,
SDDC,
SDN,
value chain,
virtualized,
VR
Tuesday, March 15, 2016
Mobile QoE White Paper
Extracted from the white paper "Mobile Networks QoE" commissioned by Accedian Networks.
2016 is an interesting year in mobile networks. Maybe for the first time, we are seeing tangible signs of evolution from digital services to mobile-first. As was the case for the transition from traditional services to digital, this evolution causes disruptions and new behaviour patterns in the ecosystem, from users to networks to service providers.
Take social networks, for example. 47% of Facebook users access the service exclusively through mobile and generate 78% of the company's ad revenue. In video streaming, YouTube sees 50% of its views on mobile devices, and 49% of Netflix's 18-to-34-year-old demographic watches on mobile.
This extraordinary change in behaviour causes unabated traffic growth on mobile networks as well as changes in the traffic mix. Video becomes the dominant use, pervading every other aspect of the network. Indeed, everyone involved in the mobile value chain has identified video services as the most promising revenue opportunity for next generation networks. Video services are rapidly becoming the new gold rush.
“Video services are the new gold rush”
Video requires specialized equipment to manage and guarantee its quality in the network; otherwise, when congestion occurs, there is a risk that it consumes resources, effectively denying voice, browsing, email and other services fair (and necessary) access to the network.
This unpredictable traffic growth results in exponential costs for networks to serve the demand.
As mobile becomes the preferred medium to consume digital content and services, Mobile Network Operators (MNOs), whose revenue was traditionally derived from selling “transport,” see their share squeezed as subscribers increasingly value content and have more and more options for accessing it. The double effect of MNOs' decreasing margins and increasing costs forces them to rethink their network architecture.
New services on the horizon, such as Voice and Video over LTE (VoLTE & ViLTE), augmented and virtual reality, wearables and IoT, automotive and M2M, will not be achievable technologically or economically with current networks.
Any architecture shift must not simply increase capacity; it must also improve the user experience. It must give the MNO granular control over how services are created, delivered, monitored and optimized. It must make the best use of capacity in each situation, to put the network at the service of the subscriber. It must make QoE, the single biggest differentiator within their control, the foundation for network control, revenue growth and subscriber loyalty.
By offering an exceptional user experience, MNOs can become the access provider of choice, part of their users' continuously connected lives as their trusted curator of apps, real-time communications and video.
“How to build massively scalable networks while guaranteeing Quality of Experience?”
QoE is rapidly becoming the major battlefield upon which network operators and content providers will differentiate themselves and win consumers' trust. Quality of Experience requires a richly instrumented network, with feedback telemetry woven through its fabric to anticipate, detect and measure any potential failure.
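As a minimal illustration of what such feedback telemetry can feed, the sketch below turns raw per-session measurements into a crude QoE score and degradation flag. The metrics, weights and thresholds are hypothetical; a real deployment would rely on standardized KPIs and far richer instrumentation.

```python
# Crude per-session QoE scoring from telemetry samples.
# Metrics, weights and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Sample:
    latency_ms: float
    jitter_ms: float
    loss_pct: float
    throughput_mbps: float

def qoe_score(s: Sample) -> float:
    """Return a 0..5 score; penalize latency, jitter and loss, reward throughput."""
    score = 5.0
    score -= min(2.0, s.latency_ms / 100.0)      # every 100 ms costs one point
    score -= min(1.0, s.jitter_ms / 50.0)
    score -= min(1.5, s.loss_pct * 0.5)
    score -= 0.0 if s.throughput_mbps >= 5 else 1.0
    return max(0.0, score)

def degraded(samples: list[Sample], threshold: float = 3.0) -> bool:
    """Flag a session when its average score falls below the threshold."""
    avg = sum(qoe_score(s) for s in samples) / len(samples)
    return avg < threshold

print(degraded([Sample(180, 40, 1.0, 3.0), Sample(220, 60, 2.0, 2.5)]))  # True
```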
Tuesday, March 1, 2016
Mobile World Congress 16 hype curve
Mobile World Congress 2016 was an interesting show in many respects. Here are some of my views on the most and least hyped subjects, including mobile video, NFV, SDN, IoT, M2M, augmented and virtual reality, TCP optimization, VoLTE and others.
First, let's start with mobile video, my pet subject, as some of you might know. 2016 sees half of Facebook users being exclusively mobile, generating over three quarters of the company's revenue, while half of YouTube views are on mobile devices and nearly half of Netflix's under-34 members watch from a mobile device. There is mobile and there is mobile, though: a good two-thirds of these views occur on wifi. Still, internet video service providers see themselves becoming mobile companies faster than they thought. The result is increased pressure on mobile networks to provide fast, reliable video services, as 2K, 4K and 360-degree video, augmented and virtual reality are next on the list of services to appear. This continues to distort the value chain as encryption, ad blocking, privacy, security, net neutrality, traffic pacing and prioritization are used as weapons of slow attrition by traditional and new content and service providers. On the network operators' side, many have deserted the video monetization battlefield. T-Mobile's Binge On seems to give MNOs pause for reflection on alternative models for video services cooperation. TCP optimization has been running hot as a technology for the last 18 months and saw Teclo Networks acquired by Sandvine on the heels of this year's congress.
Certainly, I felt a change of pace and tone in many announcements, with NFV hyperbolic claims subsiding somewhat compared to last year. Specifically, we have seen several vendors' live deployments, but mostly revolving around launches of VoLTE, virtualized EPC for MVNOs, enterprises or verticals, and ubiquitous virtualized CPE, with still little in terms of multi-vendor, generic-traffic NFV deployments at scale. Talking about VoLTE, I now have anecdotal evidence from Europe, Asia and North America that the services commercially launched are well below expectations in terms of quality and performance compared with circuit-switched voice.
The lack of maturity of orchestration standards is certainly the chief culprit here, hindering progress towards open, multi-vendor service automation.
Proof can be found in the flurry of vendor "ecosystems". If everyone works so hard to be in one, and each vendor has its own, it underlines the market fragmentation rather than reducing it.
An interesting announcement showed Telefonica, BT, Korea Telecom, Telekom Austria, SK, Sprint and several vendors taking a page from OPNFV's playbook and creating probably one of the first open-source projects within ETSI, aimed at delivering MANO as a collaborative project.
I have been advocating for such a project for more than 18 months, so I certainly welcome the initiative, even if ETSI might not feel like the most natural place for an open source project.
Overall, NFV feels more mature, but still very much disconnected from reality: a solution looking for problems to solve, with little in terms of new service creation. If all the hoopla leads to cloud-based VPNs, VoLTE and cheaper packet core infrastructure, the business case remains fragile.
The SDN announcements were somewhat muted, but showed good progress in SD-WAN and software-defined data center architecture, with the recognition, at last, that specialized switches will likely still be necessary in the short to medium term if we want high-performance software-defined fabrics, even if that impacts agility. The compromises are a sign of a maturing market, not a failure to deliver on the vendors' part, in my opinion.
IoT and M2M were still ubiquitous and vague, depicted alternately as the next big thing or as already here. The market fragmentation in terms of standards, technology, use cases and understanding leads to fanciful claims from many vendors (and operators) on the future of wearables, autonomous transport and connected objects, with little evidence of a coherent ecosystem forming. It is likely that a dominant player will emerge and provide a top-down approach, but the business case seems to hinge on killer apps that require next generation networks to be fulfilled.
5G was on many vendors' lips as well, even if it seems to consistently mean different things to different people, including MIMO, beam forming, virtualized RAN... What was clear, from my perspective, was that operators were at last ready to address latency (as opposed to, or in complement of, bandwidth) as a key resource and attribute with which to differentiate services and the associated network slices.
Big data slid right down the hype curve this year, with very little in terms of announcements or even references in vendors' product launches or deployments. It now seems a given that any piece of network equipment, physical or virtual, must generate rivulets of data that stream into rivers and data lakes, to be avidly aggregated and correlated by machine learning algorithms that provide actionable insights in the form of analytics and alerts. Vendors show progress in reporting, but true multi-vendor, holistic analytics remains extremely difficult, due to the fragmentation of vendors' data attributes and the need for data scientists and subject matter experts to work together to separate actionable insights from false positives.
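To make the "streams into data lakes" point slightly more concrete, here is a minimal sketch of the kind of per-KPI anomaly flagging such pipelines perform before any cross-vendor correlation. The KPI values, window size and threshold are arbitrary illustrations.

```python
# Minimal rolling z-score anomaly flag over a single network KPI stream.
# Window size, threshold and sample values are arbitrary illustrations.

from collections import deque
from statistics import mean, pstdev

def anomalies(stream, window=20, z_threshold=3.0):
    """Yield (index, value) for points that deviate strongly from the recent window."""
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), pstdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield i, value
        recent.append(value)

# Example: a steady latency KPI with one spike.
kpi = [20.0 + (i % 3) for i in range(40)] + [95.0] + [21.0] * 10
print(list(anomalies(kpi)))   # flags the 95.0 spike
```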
On the services side, augmented and virtual reality were revving up to the next hype phase, with a multitude of attendees walking blindly with goggles and smartphones stuck to their faces... not the smartest look, and unlikely to pass the novelty stage until integrated in less obtrusive displays. On the AR front, convincing use cases are starting to emerge, such as furniture shopping (where you can see and position furniture in your home by superimposing it from a catalogue app), that are pragmatic and useful without being too cumbersome. Anyone who has had to shop for furniture and send it back because it did not fit, or because the colour wasn't really the same as the room, will understand.
Ad blocking certainly became a subject of increased interest, as operators and service providers are still struggling for dominance. As encrypted data traffic increases, operators are starting to explore ways to provide services that users see as valuable, and if those hurt some of the OTTs' business models, that is certainly an additional bargaining chip. The melding and reforming of the mobile value chain continues and accelerates, with increased competition, collaboration and coopetition as MNOs and OTTs find a settling position. I have recently ranted about what's wrong with the mobile value chain, so I will spare you here.
Finally, my personal interest project this year revolves around Mobile Edge Computing. I have started production on a report on the subject. I think the technology has the potential to unlock many new services in mobile networks, and I can't wait to tell you more about it. Stay tuned for more!
Labels:
5G,
AR,
big data,
IoT,
M2M,
MEC,
mobile video,
NFV,
orchestration,
SDDC,
SDN,
TCP Optimization,
VoLTE,
VR
Tuesday, September 30, 2014
NFV & SDN 2014: Executive Summary
This post is extracted from my report published October 1, 2014.
Cloud and Software Defined Networking have been explored successively in academia, IT and the enterprise since 2011 and the creation of the Open Networking Foundation. They were mostly subjects of interest relegated to science projects in wireless networks until, in the fall of 2013, a collective of 13 mobile network operators co-authored a white paper on Network Functions Virtualization. This white paper became a manifesto and catalyst for the wireless community and was seminal to the creation of the eponymous ETSI Industry Specification Group.
Almost simultaneously, AT&T announced the creation of a new network architectural vision, Domain 2.0, relying heavily on SDN and NFV as building blocks for its next generation mobile network.
Today, SDN and NFV are hot topics in the industry and many companies have started to position themselves with announcements, trials, products and solutions.
This report is the result of hundreds of interviews, briefings and meetings with operators and vendors active in this field. In the process, I have attended, participated in and chaired various events such as OpenStack, ETSI NFV ISG and the SDN & OpenFlow World Congress, and became a member of ETSI, OpenStack and the TM Forum.
The Open Networking Foundation, the Linux Foundation, OpenStack, the OpenDaylight project, IEEE, ETSI and the TM Forum are just a few of the organizations involved in the definition, standardization or facilitation of cloud, SDN and NFV. This report provides a view of each organization's contribution and its progress to date.
Unfortunately, there is no such thing as SDN-NFV today. These are technologies that have overlaps and similarities but stand widely apart. Software Defined Networking is about managing network resources. It is an abstraction that allows the definition and management of IP networks in a new fashion. It separates the data plane from the control plane and allows network resources to be orchestrated and used across applications independently of their physical location. SDN exhibits a level of maturity through a variety of contributions to its leading open-source community, OpenStack. In its ninth release, the architectural framework is well suited to abstracting cloud resources, but it is dominated by enterprise and general IT interests, with little focus on or applicability to wireless networks.
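A toy illustration of the separation described above: a controller (control plane) installs match-action rules into a switch's flow table, and the switch (data plane) merely looks packets up against them. This is a conceptual sketch, not an OpenFlow implementation; the field names and actions are simplified assumptions.

```python
# Conceptual control/data plane split: the controller programs flow tables,
# the switch only matches packets against installed rules. Not an OpenFlow API.

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table = []            # ordered list of (match_dict, action)

    def install_rule(self, match: dict, action: str):
        self.flow_table.append((match, action))

    def handle_packet(self, packet: dict) -> str:
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"     # table miss: punt to the control plane

class Controller:
    """Centralized control plane: decides the rules, pushes them to switches."""
    def program(self, switch: Switch):
        switch.install_rule({"dst_port": 80}, "forward:port2")   # web traffic
        switch.install_rule({"dst_ip": "10.0.0.5"}, "drop")      # blocked host

sw = Switch("edge-1")
Controller().program(sw)
print(sw.handle_packet({"dst_ip": "10.0.0.9", "dst_port": 80}))  # forward:port2
print(sw.handle_packet({"dst_ip": "10.0.0.5", "dst_port": 22}))  # drop
```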
Network Functions Virtualization is about managing services. It allows software elements to be broken down and instantiated as virtualized entities that can be invoked, assembled, linked and managed to create dynamic services. NFV, by contrast, through its ETSI standardization group, is focused exclusively on wireless networks but, in the process of releasing its first standard, is still very incomplete in its architecture, interfaces and implementation.
SDN may or may not comprise NFV elements, and NFV may or may not be governed or architected using SDN. Many of the Proofs of Concept (PoCs) examined in this document attempt to map SDN architecture onto NFV functions in the hope of bridging the gap. Both frameworks can be complementary, but both are suffering from growing pains and a diverging set of objectives.
The intent is to paint a picture of the state of SDN and NFV implementations in mobile networks. This report describes what has been trialed, deployed in labs and deployed commercially, which elements are likely to be virtualized first, what the timeframes are, and who the main players are and what their strategies look like.
Tuesday, September 9, 2014
SDN & NFV part VI: Operators, dirty your MANO!
While NFV in ETSI was initially started by network operators with their founding manifesto, in many instances we see that, although there is a strong desire to force the commoditization of telecom appliances, operators have little appetite for performing the sophisticated integration necessary for these new systems to work.
This is reflected, for instance, in MANO, where operators seem to have put the onus back on vendors to lead the effort.
Some operators (Telefonica, AT&T, NTT…) seem to invest resources not only in monitoring the process but also in actual development of the technology; by and large, however, according to my study, MNOs seem to have taken a passenger seat in NFV implementation efforts. Many vendors note that MNOs tend to have a very hands-off approach towards the PoCs they "participate" in, offering guidance and requirements or, in some cases, just lending their name to the effort without "getting their hands dirty".
The Orchestrator's task in NFV is to integrate with the OSS/BSS and to manage the lifecycle of the VNFs and NFVI elements.
It onboards new network services and VNFs, and it performs service chaining in the sense that it decides which VNFs, and in what order, traffic must go through, according to routing rules and templates.
These routing rules are called forwarding graphs. Additionally, the Orchestrator performs policy management between VNFs. Since all VNFs are proprietary, integrating them within a framework that allows their components to interact is a huge undertaking. MANO is probably the part of the specification that is the least mature today and requires the most work.
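To illustrate what a forwarding graph boils down to, here is a minimal sketch of a service chain as an ordered list of VNFs that classified traffic is steered through. The VNF names, classification keys and packet fields are hypothetical.

```python
# Minimal service-chaining illustration: a forwarding graph is an ordered list
# of VNFs that matching traffic must traverse. VNF names and rules are hypothetical.

from typing import Callable

class VNF:
    def __init__(self, name: str, process: Callable[[dict], dict]):
        self.name, self.process = name, process

    def __call__(self, packet: dict) -> dict:
        return self.process(packet)

# Two forwarding graphs keyed on a simple traffic classification.
FORWARDING_GRAPHS = {
    "consumer_web": ["firewall", "nat", "dpi"],
    "enterprise_vpn": ["firewall", "vpn_gw"],
}

VNF_CATALOG = {
    "firewall": VNF("firewall", lambda p: {**p, "inspected": True}),
    "nat":      VNF("nat",      lambda p: {**p, "src_ip": "203.0.113.1"}),
    "dpi":      VNF("dpi",      lambda p: {**p, "app": "http"}),
    "vpn_gw":   VNF("vpn_gw",   lambda p: {**p, "encrypted": True}),
}

def chain(packet: dict, service: str) -> dict:
    """Steer the packet through the VNFs of the selected forwarding graph, in order."""
    for vnf_name in FORWARDING_GRAPHS[service]:
        packet = VNF_CATALOG[vnf_name](packet)
    return packet

print(chain({"src_ip": "10.1.1.7", "dst_port": 80}, "consumer_web"))
```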
Since it is the brain of the framework, failure of MANO to reach a level of maturity enabling consensus among the participants of the ISG will inevitably relegate NFV to vertical implementations. This could lead to a network with a collection of vertically virtualized elements, each having its own MANO or very high-level API abstractions, considerably reducing overall system elasticity and programmability. SDN OpenStack-based models can be used for MANO orchestration of resources (the Virtualized Infrastructure Manager) but offer little applicability in the pure orchestration and VNF management field beyond the simplest IP routing tasks.
Operators who are serious about NFV in wireless networks should seriously consider developing their own orchestrator or, at a minimum, implementing strict orchestration guidelines. They could force vendors to adopt a minimum set of VNF abstraction templates for service chaining and policy management.
Labels:
ATT,
ETSI,
NFV,
NTT,
openstack,
orchestration,
SDDC,
SDN,
service enablement,
Telefonica,
virtualized
Friday, May 2, 2014
NFV & SDN part I
You will remember, close to 15 years ago, when all telecom platforms had to be delivered on hardened, NEBS-certified Sun Solaris SPARC with a full-fledged Oracle database to be "telecom grade". Little by little, x86 platforms, MySQL databases and Linux have penetrated the ecosystem. It was originally a vendor-driven initiative to reduce their third-party costs. The cost reduction was passed on to the MNOs who were willing to risk implementing these new platforms. We have seen their implementation grow from greenfield operators in emerging countries to mature markets, first at the periphery of the network, slowly making their way to business-critical infrastructure.
We are seeing today an analogous push with NFV to reduce costs further and ban proprietary hardware implementations. Pushed initially by operators, this initiative sees most network functions first transitioning from hardware to software, then being run in virtualized environments on off-the-shelf hardware.
The first companies to embrace NFV have been "startups" like Affirmed Networks. First met with scepticism, the company seems to have been able to design from scratch and commercially deploy a virtualized Evolved Packet Core in only 4 years. It certainly helps that the company was founded to the tune of over 100 million dollars from big names such as T-Ventures and Vodafone, providing not only funding but presumably also lab capacity at their parent companies to test and fine-tune the new technology.
Since then, vendors have started embracing the trend and are moving more or less enthusiastically towards virtualization of their offerings. We have seen different approaches emerge, from the simple porting of software to Xen or VMware virtualized environments to more accomplished OpenStack / OpenFlow platforms.
I am actively investigating the field and I have to say some vendors' strategies are head-scratching. In some cases, moving to a virtualized environment is counter-productive. Some telecom products are highly CPU-intensive or specialized and require dedicated resources to attain high performance and scalability in a cost-effective package; deep packet inspection and video processing seem to be good examples. Even the vendors who have virtualized their appliance or solution will, when pushed, admit that virtualization comes at a performance cost in the current state of the technology.
I have been reading the specs (OpenFlow, OpenStack) and I have to admit they seem far from the level of detail that we usually see in telco specs to be usable: a lot of abstraction dedicated to redefining switching, not much in terms of call flows, datagrams, semantics, service definition, etc.
How the hell does one go about launching a service in a multivendor environment? Well, one doesn't. There is a reason why most NFV initiatives are still at the plumbing level, investigating SDN, SDDC, etc., or taking a single-vendor / single-service approach. I haven't yet been convinced by anyone's implementation of multi-vendor management, let alone "service orchestration". We are witnessing today islands of service virtualization in hybrid environments. We are still far from function virtualization per se.
The challenges are multiple:
- Which is better: a dedicated platform with a low footprint and power requirement that might be expensive and centralized, or thousands of virtual instances occupying hundreds of servers that might be cheap (COTS) individually but collectively not very cost or power efficient? (A rough cost sketch follows this list.)
- Will network operators trade capex for opex when they need to manage thousands of applications running virtually on IT platforms? How will their personnel, trained to troubleshoot problems by following the traffic and signalling path, adapt to this fluid, non-descript environment?
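The first question lends itself to a simple total-cost comparison. The sketch below compares a single dedicated appliance against a fleet of COTS servers over a few years; every price, power figure and capacity assumption is a hypothetical placeholder.

```python
# Toy total-cost comparison: dedicated appliance vs. a fleet of COTS servers.
# All prices, power figures and capacities are hypothetical placeholders.

def total_cost(unit_capex, units, power_kw_per_unit, years,
               kwh_price=0.15, hours_per_year=8760, opex_ratio=0.10):
    """CAPEX + energy + maintenance over the period."""
    capex = unit_capex * units
    energy = power_kw_per_unit * units * hours_per_year * years * kwh_price
    maintenance = capex * opex_ratio * years
    return capex + energy + maintenance

# One purpose-built appliance handling the full load...
appliance = total_cost(unit_capex=400_000, units=1, power_kw_per_unit=2.0, years=5)
# ...versus 40 cheap servers assumed to deliver the same aggregate throughput.
cots_fleet = total_cost(unit_capex=8_000, units=40, power_kw_per_unit=0.4, years=5)

print(f"appliance: {appliance:,.0f}  cots fleet: {cots_fleet:,.0f}")
```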
We are still early in this game, but many vendors are starting to purposefully position themselves in this space to capture the next wave of revenue.
Stay tuned: more to come later this year with a report on the technology, market trends and vendor capabilities in this space.