Showing posts with label MEC. Show all posts

Friday, November 3, 2023

Telco edge compute, RAN and AI


In recent years, the telecommunications industry has witnessed a profound transformation, driven by the rapid penetration of cloud technologies. Cloud Native Functions (CNFs) have become common in the packet core, OSS/BSS and transport, and are making their way into the access domain, both fixed and mobile. As a result, virtual infrastructure management and data centers have become an important part of network capex strategies.

While edge computing in telecoms, with the emergence of MEC (Multi-Access Edge Computing), has been mostly confined to telco network functions (UPF, RAN CU/DU...), network operators should now explore the opportunities for retail and wholesale edge computing services. My workshop examines in detail the strategies, technologies and challenges associated with this opportunity.

Traditional centralized cloud infrastructure is being augmented with edge computing, effectively bringing computation and data storage closer to the point of data generation and consumption.

What are the benefits of edge computing for telecom networks?

  • Low Latency: One of the key advantages of edge computing is its ability to minimize latency. This is of paramount importance in telecoms, especially in applications like autonomous vehicles, autonomous robots / manufacturing, and remote-controlled machinery.
  • Bandwidth Efficiency: Edge computing reduces the need for transmitting massive volumes of data over long distances, which can strain network bandwidth. Instead, data processing and storage take place at the edge, significantly reducing the burden on core networks. This is particularly relevant for machine vision, video processing and AI use cases.
  • Enhanced Security: Edge computing offers improved security by allowing sensitive data to be processed locally. This minimizes the exposure of critical information to potential threats in the cloud. Additionally, privacy, data sovereignty and residency concerns can be efficiently addressed by local storage / computing.
  • Scalability: Edge computing enables telecom operators to scale resources as needed, making it easier to manage fluctuating workloads effectively.
  • Simpler, cheaper devices: Edge computing allows devices to be cheaper and simpler while retaining sophisticated functionality, as storage and processing can be offloaded to a nearby edge compute facility.

Current Trends in Edge Computing for Telecoms

The adoption of edge computing in telecoms is rapidly evolving, with several trends driving the industry forward:

  • 5G and private networks integration: The deployment of 5G networks is closely intertwined with edge computing. 5G's high data transfer rates and low latency requirements demand edge infrastructure to deliver on its promises effectively. Cloud RAN and service-based-architecture packet core functions drive demand for edge computing to colocate UPF and CU/DU functions, particularly for private networks.
  • Network Slicing: Network operators are increasingly using network slicing to create virtualized network segments, allowing them to allocate resources and customize services for different applications and use cases.
  • Ecosystem Partnerships: Telcos are forging partnerships with cloud providers, hardware manufacturers, and application developers to explore retail and wholesale edge compute services.

Future Prospects

The future of edge computing in telecoms offers several exciting possibilities:
  • Edge-AI Synergy: As artificial intelligence becomes more pervasive, edge computing will play a pivotal role in real-time AI processing, enhancing applications such as facial recognition, autonomous drones, and predictive maintenance. Additionally, AI/ML is emerging as a key value proposition in a number of telco CNFs, particularly in the access domain, where RAN intelligence is key to optimize spectrum and energy usage, while tailoring user experience.
  • Industry-Specific Edge Solutions: Different industries will customize edge computing solutions to cater to their unique requirements. This could result in the development of specialized edge solutions for healthcare, manufacturing, transportation, and more.
  • Edge-as-a-Service: Telecom operators are likely to offer edge services as a part of their portfolio, allowing enterprises to deploy and manage edge resources with ease.
  • Regulatory Challenges: As edge computing becomes more integral to telecoms, regulatory challenges may arise, particularly regarding data privacy, security, and jurisdictional concerns.

New revenue streams can also be captured with the deployment of edge computing.

  • For consumers, the lowest-hanging fruit in the short term is likely gaming. While hyperscalers and gaming companies have launched their own cloud gaming services, their success has been limited due to the poor online experience. The most successful game franchises are Massive Multiplayer Online titles. They pit dozens of players against each other and require tightly controlled latency between all players for fair and enjoyable gameplay. Only operators can provide controlled latency, by deploying gaming servers at the edge. Even without a full-blown gaming service, game caching at the edge can drastically reduce download times for games, updates and patches, which dramatically increases players' satisfaction.
  • For enterprise users, edge computing has dozens of use cases that can be implemented today that are proven to provide superior experience compared to the cloud. These services range from high performance cloud storage, to remote desktop, video surveillance and recognition.
  • Beyond operator-owned services, the largest opportunity is certainly the enablement of edge as a service (EaaS), allowing cloud developers to use edge resources as specific cloud availability zones.

Edge computing is rapidly maturing in the telecom industry by enabling low-latency, high-performance, and secure services that meet the demands of new use cases. As we move forward, the integration of edge computing with 5G and the continuous development of innovative applications will shape the industry's future. Telecom operators that invest in edge computing infrastructure and capabilities will be well-positioned to capitalize on the opportunities presented by this transformative technology.


Thursday, July 27, 2023

The 5G letdown


I have often written about what I think are the necessary steps for network operators to grow and prosper in our digital world. Covid, the changes in work modes, the hiring gluttony of the GAFAs, and the geopolitical situation, between the banning of untrusted vendors and the consequences of a European conflict, have created quite a different situation today.

Twitter / X's reorganization and mass layoffs signaled to the tech industry that it was OK to look for productivity and profitability, and that over-hiring without a clear mission, or reorienting companies' entire strategies around far-fetched, unproven concepts (web3, metaverse, crypto...), had very costly consequences. Fast forward to this summer of 2023: most GAFAs have been refocusing their efforts on their core business, with less intent on changing the telecoms landscape. This lull has allowed many network operators to post healthy growth and profits, while simultaneously laying off, or fast-tracking early retirement for, some of their least adequately skilled personnel.

I think that a lot of these positive telco results are conjunctural rather than structural, and one crucial issue remains for operators (and their suppliers): 5G is a bust. So far.

The consumer market is not really looking for more speed at this time. The main selling proposition of 5G seems to be the 5G logo on your phone. I have 4G and 5G phones and I can't really tell the difference from a network user experience standpoint.

No real 5G use case has emerged to justify the hype, and all in all, consumers are more likely to fork out thousands of dollars for a new device than an additional 10 per month for "better" connectivity. Especially since we telco literati know that 5G Non-Standalone is not really 5G, more like 4G+. Until 5G Standalone emerges dominantly, the promises of 5G won't be fulfilled.

The promise and business case of 5G was supposed to revolve around new connectivity services. Until now, essentially, whether you have a smartphone, a tablet, a laptop, a connected car or an industrial robot, and whether you are a work-from-home or road-warrior professional, all connectivity products are really the same. The only variables are price and coverage.

5G was supposed to offer connectivity products that could be adapted to different device types, verticals and industries, geographies, vehicles, drones... The 5G business case hinges on enterprises, verticals and governments adopting and being willing to pay for enhanced connectivity services. By and large, this hasn't happened yet. There are several reasons for this, the main one being that to enable these, a network overhaul is necessary.

First, a service-based architecture is necessary, comprising 5G Standalone, telco cloud, Multi-Access Edge Computing (MEC), and service management and orchestration. Then, cloud-native RAN, either cloud RAN or Open RAN (particularly the RAN Intelligent Controllers - RICs), would be useful. All this "plumbing" enables end-to-end slicing, which in turn creates the capability to serve distinct and configurable connectivity products.

But that's not all. A second issue is that although it is accepted wisdom that slicing will create connectivity products that enterprises and governments will be ready to pay for, there is little evidence of it today. One of the key differentiators of "real" 5G and slicing will be deterministic speed and latency. While most actors in the market are ready to recognize that in principle a controllable latency would be valuable, no one really knows the incremental value of going from variable best effort to deterministic 100, 10 or 5 millisecond latency.

The last hurdle is the realization by network operators that Mercedes, Walmart, 3M, Airbus... have a better understanding of their connectivity needs than any carrier, and that they have skilled people able to design networks and connectivity services across WAN, cloud, private and cellular networks. All they need is access and a platform with APIs. A means to discover, reserve and design connectivity services on the operator's network will be necessary, and the successful operators will understand that their network skillset might be useful for consumers and small / medium enterprises, but less so for large verticals, governments and companies.
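The "platform with APIs" idea can be sketched as follows. This is a hypothetical illustration, not any real carrier's API: the product names, latency figures and prices are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical catalog of connectivity products an operator might expose.
# All names and figures below are illustrative assumptions.
@dataclass
class ConnectivityProduct:
    name: str
    max_latency_ms: int
    min_throughput_mbps: int
    price_per_month: float

CATALOG = [
    ConnectivityProduct("best-effort", 100, 10, 10.0),
    ConnectivityProduct("video-optimized", 50, 25, 25.0),
    ConnectivityProduct("deterministic-10ms", 10, 50, 120.0),
]

def discover(max_latency_ms: int, min_throughput_mbps: int):
    """Return products meeting the requested latency and throughput bounds."""
    return [p for p in CATALOG
            if p.max_latency_ms <= max_latency_ms
            and p.min_throughput_mbps >= min_throughput_mbps]

def reserve(product: ConnectivityProduct, site: str) -> dict:
    """Stand-in for a reservation call; a real API would return an order id."""
    return {"site": site, "product": product.name, "status": "reserved"}

# An enterprise network team querying for a robotics-grade connection:
matches = discover(max_latency_ms=20, min_throughput_mbps=30)
print([p.name for p in matches])        # ['deterministic-10ms']
order = reserve(matches[0], site="factory-7")
```

The point of the sketch is the shape of the interaction, discover then reserve, which lets the enterprise's own network engineers compose connectivity without operator involvement.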

My Telco Cloud + Edge Computing and Open RAN workshops examine the technologies, use cases, implementations, strategies, operators and vendors who underlie the key growth factors for telco operators' and vendors' success in the "real" 5G.



Tuesday, October 6, 2020

Telco grade or Cloud grade?

 

For as long as I can remember, working in Telco, there has been the assumption that Telco networks were special. 

They are regulated, they are critical infrastructure, and they require a level of engineering and control that goes beyond traditional IT. This has often been the reason why some technologies and vendors haven't been that successful in this space, despite having stellar records in other equally (more?) demanding industries such as energy, finance, space, defence...

Being Telco grade, when I cut my teeth as a telco supplier, meant high availability (5x9's), scalability and performance (100's of millions of simultaneous streams, connections, calls...), and security, achieved with multiple vertical and horizontal redundancies and deployment on highly specialized appliances.

Along comes the Cloud, with its fancy economics, underpinned by the separation of hardware and software, virtualization, then decomposition, then disaggregation of software elements into microservices. Add to it some control / user plane separation; centralized control, management, configuration, deployment, rollout and scalability rules; a little decentralized telemetry; and systematic automation through a radical opening of APIs between layers... That's the recipe for Cloud grade networks.

At the beginning, the Telco-natives looked at these upstarts with a little disdain: "that's good for web traffic. If a request fails, you just retry. It will never be enough for Telco grade...".

Then with some interest: "maybe we can use that Cloud stuff for low-networking, low-compute stuff like databases, inventory management... It's not going to enable real telco grade stuff, but maybe there are some savings".

Then, more seriously: "we need to harness the benefits of the cloud for ourselves. We need to build a Telco cloud". This is about the time the seminal white paper on Telco virtualization launched NFV and a flurry of activity to take an IT-designed cloud fabric (read OpenStack) and make it Telco grade (read: pay traditional Telco vendors, who have never developed or deployed a cloud fabric at scale, to make proprietary branches of an open source project, hardened with memorable features such as DPDK, SR-IOV and CPU pinning, so that the porting of their proprietary software onto hypervisors does not die under the performance SLAs...).

Fast forward a few years: orchestration and automation become the latest targets, and a zoo of competing proprietary-turned-open-source projects starts to emerge, while large communities of traditional telco vendors are invited to charitably contribute time and code on behalf of Telcos to projects that they have no interest in developing or selling.

In the meantime, Cloud grade has grown in coverage, capacity, ecosystem, revenues, use cases, flexibility, availability, scalability... by almost any metric you can imagine, while reducing costs and prices. Additionally, we are seeing new "cloud native" vendors emerge with Telco products that are very close to the Telco grade ideal in terms of performance, availability and scalability, at a fraction of the cost of the Telco-natives. Telco functions that the Telco-natives swore could never find their way to the cloud are being deployed there, for security, connectivity, core networks, even RAN...

I think it is about time that the Telco-natives accept and embrace that it is probably faster, more cost-efficient and more scalable to take a Cloud-native function and make it Telco grade than to take the whole legacy Telco network and try to make it Cloud grade. It doesn't mean throwing away all the legacy investment, but at least considering a sunsetting strategy and cap-and-grow. Of course, it also means being comfortable with the fact that the current dependencies on traditional Telco vendors might have to be traded for dependencies on hyperscalers, who might, or might not, become competitors down the line. Not engaging with them is not going to change that fact. 5G Standalone, Open RAN or MEC are probably good places to start, because they are greenfield. This is where the smart money is these days, as entry strategies into the Telco world go...



Wednesday, November 8, 2017

Telefonica´s Internet para todos

Presented today on the need for the industry to evolve to connect the unconnected, and what Telefonica is doing about it, from applying HD population density modeling to assembling innovative networks and commercial and operating models with LatAm partners.





Monday, June 13, 2016

Time to get out of consumer market for MNOs?

I was delivering a workshop on SDN / NFV in wireless last week at a major pan-European tier-one operator group, and the questions of encryption and net neutrality were put on the table again.

How much clever, elastic, agile software-defined traffic management can we really expect when "best effort" dictates the extent of traffic management and encryption makes even understanding traffic composition and velocity difficult?

There is no easy answer. I have spoken at length on both subjects (here and here, for instance) and the challenges have not changed much. Encryption is still a large part of traffic, and although it is not growing as fast as initially projected after Google, Netflix, Snapchat or Facebook's announcements, it is still a dominant part of data traffic. Many start to think that HTTPS / SSL is a first-world solution, as many small and medium-scale content or service providers that live on freemium or ad-sponsored models can't afford the additional cost and latency unless they are forced to. Some think that encryption levels will hover around 50-60% of the total until mass adoption of HTTP/2, which could take 5+ years. We have seen, with T-Mobile's Binge On, a first service launch that actively manages traffic, even encrypted, to an agreed-upon quality level. The net neutrality activists cried foul at the launch of the service, but quickly retreated when they saw its popularity and the first tangible signs of collaboration between content providers, aggregators and operators for customers' benefit.

As mentioned in the past, the problem is not technical, moral or academic. Encryption and net neutrality are just symptoms of an evolving value chain where the players are attempting to position themselves for dominance. The solution will be commercial and will involve collaboration in the form of content metadata exchange, to monitor, control and manage traffic. Mobile Edge Computing can be a good enabler of this. Mobile advertising, which is still missing over $20B in investment in the US alone when compared to other media and time spent / eyeball engagement, will likely be part of the equation as well.

...but what happens in the meantime, until the value chain realigns? We have seen consumer postpaid ARPU declining in most mature markets for the last few years, while engagement and usage of so-called OTT services have exploded. Many operators continue to keep their heads in the sand, thinking "business as usual" while timidly investigating new potential "revenue streams".

I think that the time has come for many to wake up and take hard decisions. In many cases, operators are not equipped organizationally or culturally for the transition that is necessary to flourish in a fluid environment where consumers flock to services that are free, freemium, or ad-sponsored. Subscription services, what operators know best, see their prices under intense pressure, because OTTs look at usage and penetration globally rather than per country. For those operators who understand the situation and are changing their ways, the road is still long and strewn with obstacles, particularly on the regulatory front, where they are not playing by the same rules as their OTT competition.

I suggest here that for many operators, it is time to get out. You had a good run, made lots of money on consumer services through 2G, 3G and early 4G; the next dollars or euros are going to be tremendously more expensive to earn than the earlier ones.
At this point, I think there are emerging and underdeveloped verticals (such as enterprise and IoT) that are easier to penetrate (fewer regulatory barriers, more need for managed network capabilities and, at least in the case of enterprise, more investment possibilities).
I think that at this stage, any operator who derives most of its revenue from consumer services should assume that these revenues will likely dwindle to nothing unless drastic operational, organizational and cultural changes occur.
Some operators see the writing on the wall and have started the effort. There is no guarantee that it will work, but certainly having a software-defined, virtualized, elastic network will help if they are betting the farm on service agility. Others are looking at new technologies, open source and standards as they have done in the past: aligning little boxes from industry vendors in neat PowerPoint roadmap presentations, hiring a head of network transformation or virtualization... For them, I am afraid, the reality will come hard and fast. You don't invest in technologies to build services. You build services first and then look at whether you need more or new technologies to enable them.

Thursday, May 5, 2016

MEC: The 7B$ opportunity

Extracted from Mobile Edge Computing 2016.



Defining an addressable market for an emerging product or technology is always an interesting challenge. On one hand, you have to evaluate the problems the technology solves and their value to the market; on the other hand, appreciate the likely cost structure and psychological price expectations of the potential buyers / users.

This warrants a top-down and bottom-up approach, looking at how the technology can contribute to or substitute some current radio and core network spending, together with a cost-based review of the potential physical and virtual infrastructure. [...]

The cost analysis is comparatively easy, as it relies on the well-understood current cost structure for physical hardware and virtual functions. The assumptions surrounding hardware costs have been reviewed with the main x86-based hardware vendors. The VNF pricing relies on discussions with large and emerging telecom equipment vendors on the price structure of standard VNFs such as EPC, IMS, encoding, load balancers, DPI… Traditional telco professional services, maintenance and support costs are apportioned and included in the calculations.

The overall assumption is that MEC will become part of the fabric of 5G networks and that MEC equipment will cover up to 20% of a network (coverage or population) when fully deployed.
The report features the total addressable market, cumulative and incremental, for MEC equipment vendors and integrators, broken down by CAPEX / OPEX and by consumer, enterprise and IoT services.
It then provides a review of operators' opportunities and revenue models for each segment.
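The top-down side of such a model can be sketched in a few lines. To be clear, these are not the report's figures: every number below is a placeholder assumption, chosen only to show the arithmetic.

```python
# Illustrative top-down sizing of a MEC addressable market.
# Every figure used here is a placeholder assumption, not data from the report.
def mec_tam(network_spend_b: float, edge_share: float, coverage: float) -> float:
    """TAM ($B) = network spend MEC can substitute or augment,
    scaled by the fraction of sites actually edge-enabled."""
    return network_spend_b * edge_share * coverage

# e.g. $40B annual radio/core spend, 15% edge-addressable,
# and the 20% network coverage assumption mentioned above:
print(round(mec_tam(40.0, 0.15, 0.20), 2))  # 1.2
```

A bottom-up check would then price the edge hardware and VNFs per site and multiply by the number of covered sites, with the two estimates bounding the answer.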


Monday, April 25, 2016

Mobile Edge Computing 2016 is released!



5G networks will bring extreme data speed and ultra low latency to enable Internet of Things, autonomous vehicles, augmented, mixed and virtual reality and countless new services.

Mobile Edge Computing is an important technology that will enable and accelerate key use cases while creating a collaborative framework for content providers, content delivery networks and network operators. 

Learn how mobile operators, CDNs, OTTs and vendors are redefining cellular access and services.

Mobile Edge Computing is a new ETSI standard that uses latest virtualization, small cell, SDN and NFV principles to push network functions, services and content all the way to the edge of the mobile network. 


This 70-page report reviews in detail what Mobile Edge Computing is, who the main actors are and how this potential multi-billion-dollar technology can change how OTTs, operators, enterprises and machines can enable innovative and enhanced services.

Providing an in-depth analysis of the technology, the architecture, the vendors' strategies and 17 use cases, this first industry report outlines the technology's potential and addressable market from a vendor, service provider and operator's perspective.

Table of contents, executive summary can be downloaded here.

Monday, April 4, 2016

MEC 2016 Executive Summary

2016 sees a sea change in the fabric of the mobile value chain. Google reports that mobile search revenue now exceeds desktop, while 47% of Facebook members are now exclusively on mobile, generating 78% of the company's revenue. It has taken time, but most OTT services that were initially geared towards the internet are rapidly transitioning towards mobile.

The impact is still to be felt across the value chain.

OTT providers have a fundamentally different view of services and value different things than mobile network operators. While mobile networks have been built on the premise of coverage, reliability and ubiquitous access to metered network-based services, OTTs rely on free, freemium, ad-sponsored or subscription-based services where fast access and speed are paramount. Increases in latency impact page loads and search times, and can cost OTTs billions in revenue.

The reconciliation of these views and the emergence of a new coherent business model will be painful but necessary and will lead to new network architectures.

Traditional mobile networks were originally designed to deliver content and services that were hosted on the network itself. The first mobile data applications (WAP, multimedia messaging…) were deployed in the core network, as a means to be both as close as possible to the user but also centralized to avoid replication and synchronization issues.
3G and 4G networks still bear the design associated with this antiquated distribution model. As technology and user behaviours have evolved, a large majority of content and services accessed on cellular networks today originate outside the mobile network. Although content is now stored in and accessed from clouds, caches, CDNs and the internet, a mobile user still has to go through the internet, the core network, the backhaul and the radio network to get to it. Each of these steps sees a substantial decrease in throughput capacity, from 100's of Gbps down to Mbps or less, and each hop adds latency. This is why networks continue to invest in increasing throughput and capacity. Streaming a large video or downloading a large file from a cloud or the internet is a little bit like trying to suck ice cream through a 3-foot bending straw.

Throughput and capacity will certainly grow tremendously with the promises of 5G networks, but latency remains an issue. Reducing latency requires reducing the distance between the consumer and where content and services are served. CDNs and commercial specialized caches (Google, Netflix…) have been helping reduce latency in fixed networks, by caching content as close as possible to where it is consumed, with the propagation and synchronization of content across Points of Presence (PoPs). Mobile networks' equivalent of PoPs are the eNodeBs, RNCs or cell aggregation points. These network elements, part of the Radio Access Network (RAN), are highly proprietary, purpose-built platforms that route and manage mobile radio traffic. Topologically, they are the closest elements mobile users interact with when accessing mobile content. Positioning content and services there, right at the edge of the network, would substantially reduce latency.
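The hop-by-hop argument can be made concrete with a toy latency budget. All per-hop values below are illustrative assumptions, not measurements from any network.

```python
# Rough latency-budget sketch: each hop between the user and where content
# is served adds delay. All per-hop values are illustrative assumptions (ms).
HOPS_CLOUD = {"radio": 10, "backhaul": 5, "core": 10, "internet": 25, "cloud_dc": 10}
HOPS_EDGE  = {"radio": 10, "edge_site": 2}   # content served right at the RAN edge

def total_latency_ms(hops: dict) -> int:
    """Sum the per-hop contributions along the delivery path."""
    return sum(hops.values())

print(total_latency_ms(HOPS_CLOUD))  # 60
print(total_latency_ms(HOPS_EDGE))   # 12
```

Whatever the exact numbers, the structure of the calculation is the point: serving from the edge removes the backhaul, core and internet terms entirely, rather than shaving a little off each.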
For the first time, there is an opportunity for network operators to offer OTTs what they will value most: ultra-low latency, which will translate into a premium user experience and increased revenue. This will come at a cost, as physical and virtual real estate at the edge of the network will be scarce. Net neutrality will not work at the scale of an eNodeB, as commercial law will dictate the few applications and services providers that will be able to pre-position their content.

Mobile Edge Computing provides the ability to deploy commercial-off-the-shelf (COTS) IT systems right at the edge of the cellular network, enabling ultra-low latency, geo-targeted delivery of innovative content and services. More importantly, MEC is designed to create a unique competitive advantage for network operators derived from their best assets, the network and the customers’ behaviour. This report reviews the opportunity and timeframe associated with the emergence of this nascent technology and its potential impact on mobile networks and the mobile value chain.

Friday, March 18, 2016

For or against Adaptive Bit Rate? part V: centralized control

I have seen over the last few weeks much speculation and many claims about T-Mobile's Binge On service launch, and these have accelerated with yesterday's announcement of Google Play and YouTube joining the service. As usual, many are getting on their net neutrality battle horse, using fraught assumptions and misconceptions to reject the initiative.

I have written at length about what ABR is and what its pros and cons are; you can find some extracts in the links at the end of this post. I'll try here to share my views and expose some facts to enable a more pragmatic approach.

I think we can safely assume that every actor in the mobile video delivery chain wants to enable the best possible user experience, whenever possible.
As I have written in the past, in the current state of affairs, adaptive bit rate is oftentimes corrupted in order to seize as much network bandwidth as possible, which results in devices and service providers aggressively competing for bits and bytes.
Content providers assume that the highest content quality (1080p HD video, for instance) equals maximum subscriber experience, and therefore try to capture as much network resource as possible to deliver it. Browser / app / phone manufacturers also assume that more speed equals better user experience, and therefore try to commandeer as much capacity as possible. The flaw here is the assumption that the optimum is the product of many maxima, self-regulated by an equal and fair apportioning of resources. This shows a complete ignorance of how networks are designed, how they operate and how traffic flows through them.

An OTT cannot know why a user's session downstream speed is degrading; it can just report it. Knowing why is important because it enables better decisions in terms of the corrective actions that need to be undertaken to preserve the user's experience. For instance, a reduction of bandwidth for a particular user can be the result of a handover (4G to 3G, or between cells with different capacity), congestion in a given cell, the distance between the phone and the antenna, the user entering a building or an elevator, or the user reaching her data cap and being throttled, etc. Reasons can be multiple, and for each of them a given corrective action can have a positive or a negative effect on the user's experience. For instance, in a video streaming scenario, you can have a group of people in a given cell streaming Netflix and others streaming YouTube. Naturally, the video is streamed in progressive-download adaptive bit rate format, which means that each stream will try to climb to the highest available download bit rate to deliver the highest video definition possible. All sessions will theoretically increase the delivered definition up to the highest available or the highest delivery bit rate, whichever comes first. In a network with ample capacity, everyone ramps up to 1080p and everyone has a great user experience.

More often than not, though, that particular cell cannot accommodate everyone's stream at the highest definition at the same time. Adaptive bit rate is supposed to help there again, by stepping down definition until it fits within the available delivery bit rate. Unfortunately, it can't work like that when we are looking at multiple sessions from multiple OTTs. Specifically, as soon as one player starts reducing its definition to meet a lower delivery bit rate, that freed-up bandwidth is grabbed by the other players, which can now increase their definition even more. There is no incentive for content providers to reduce bandwidth quickly to follow network conditions, because they can be starved by their competition in the same cell.

The solution here is simple, the delivery of ABR video content has to be managed and coordinated between all providers. The only way and place to provide this coordination is in the mobile network, as close to the radio resource as possible. [...]
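A toy model makes the contention described above concrete. The bit-rate ladder, the cell capacity and the step-up/step-down rules are all simplifying assumptions, not a model of any real scheduler.

```python
# Toy model of greedy ABR players sharing one cell: whoever backs off first
# frees bandwidth the others grab, so unilateral step-downs are punished.
# Ladder, capacity and update rules are illustrative assumptions.
LADDER = [1, 2, 4, 8]     # Mbps per rendition (e.g. 240p .. 1080p)
CELL_CAPACITY = 10        # Mbps available in the cell

def greedy_round(levels):
    """Each player independently steps down under congestion,
    or steps up whenever spare capacity appears."""
    new = list(levels)
    for i, lvl in enumerate(levels):
        used = sum(LADDER[l] for l in new)
        if used > CELL_CAPACITY and lvl > 0:
            new[i] = lvl - 1          # back off under congestion
        elif lvl < 3 and used + (LADDER[lvl + 1] - LADDER[lvl]) <= CELL_CAPACITY:
            new[i] = lvl + 1          # grab freed-up bandwidth
    return new

def coordinated(n_players):
    """A network-side scheduler gives every stream the highest common rung."""
    per_stream = CELL_CAPACITY / n_players
    return max(l for l in range(4) if LADDER[l] <= per_stream)

levels = [3, 3, 3]        # three streams all trying for 1080p = 24 Mbps
for _ in range(5):
    levels = greedy_round(levels)
print(levels)             # [1, 2, 2]: the first player to back off stays stuck
print(coordinated(3))     # 1: every stream gets the same 2 Mbps rendition
```

The greedy outcome is stable but unfair: the stream that yields first ends up one rung below its competitors, which is exactly the disincentive to cooperate described above, while network-side coordination trades a little peak definition for equal treatment.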

This and more in my upcoming Mobile Edge Computing report.


Part I: What is ABR?
Part II: For ABR
Part III: Why isn't ABR more successful?
Part IV: Alternatives

Tuesday, March 1, 2016

Mobile World Congress 16 hype curve

Mobile World Congress 2016 was an interesting show in many respects. Here are some of my views on the most and least hyped subjects, including mobile video, NFV, SDN, IoT, M2M, augmented and virtual reality, TCP optimization, VoLTE and others.

First, let's start with mobile video, my pet subject, as some of you might know. In 2016, half of Facebook's users are exclusively mobile, generating over 3/4 of the company's revenue, while half of YouTube views are on mobile devices and nearly half of Netflix members under 34 watch from a mobile device. There is mobile and there is mobile, though: a good 2/3 of these views occur over wifi. Still, internet video service providers see themselves becoming mobile companies faster than they thought. The result is increased pressure on mobile networks to provide fast, reliable video services, as 2K, 4K, 360-degree video, augmented and virtual reality are next on the list of services to appear. This continues to create distortions in the value chain, as encryption, ad blocking, privacy, security, net neutrality, traffic pacing and prioritization are being used as weapons of slow attrition by traditional and new content and service providers. On the network operators' side, many have deserted the video monetization battlefield. T-Mobile's Binge On seems to give MNOs pause for reflection on alternative models for video services cooperation. TCP optimization has been running hot as a technology for the last 18 months, and saw Teclo Networks acquired by Sandvine on the heels of this year's congress.

Certainly, I have felt a change of pace and tone in many announcements, with NFV's hyperbolic claims subsiding somewhat compared to last year. Specifically, we have seen several vendors' live deployments, but mostly revolving around launches of VoLTE, virtualized EPC for MVNOs, enterprises or verticals, and the ubiquitous virtualized CPE; still little in terms of multi-vendor, generic-traffic NFV deployments at scale. Talking about VoLTE, I now have anecdotal evidence from Europe, Asia and North America that the services commercially launched perform well below expectations in terms of quality and performance against circuit-switched voice.
The lack of maturity of orchestration standards is certainly the chief culprit here, hindering progress towards open, multi-vendor service automation.
Proof can be found in the flurry of vendor "ecosystems". If everyone works so hard to be in one, and each vendor has its own, it underlines the market fragmentation rather than reducing it.
An interesting announcement saw Telefonica, BT, Korea Telecom, Telekom Austria, SK, Sprint and several vendors take a page from OPNFV's playbook and create probably one of the first open-source projects within ETSI, aimed at delivering a collaborative MANO project.
I have been advocating for such a project for more than 18 months, so I certainly welcome the initiative, even if ETSI might not feel like the most natural home for an open source project.

Overall, NFV feels more mature, but still very much disconnected from reality: a solution looking for problems to solve, with little in terms of new service creation. If all the hoopla leads to cloud-based VPNs, VoLTE and cheaper packet core infrastructure, the business case remains fragile.

The SDN announcements were somewhat muted, but showed good progress in SD-WAN and software-defined data center architecture, with the recognition, at last, that specialized switches will likely still be necessary in the short to medium term if we want a high-performance software-defined fabric, even if that impacts agility. These compromises are a sign of a maturing market, not of a failure to deliver on the vendors' part, in my opinion.

IoT and M2M were still ubiquitous and vague, depicted alternately as the next big thing or already here. The market fragmentation in terms of standards, technology, use cases and understanding leads to fanciful claims from many vendors (and operators) on the future of wearables, autonomous transport, connected objects... with little evidence of a coherent ecosystem forming. It is likely that a dominant player will emerge and provide a top-down approach, but the business case seems to hinge on killer apps that hint at next-generation networks yet to be fulfilled.

5G was on many vendors' lips as well, even if it seems to mean different things to different people, including MIMO, beamforming, virtualized RAN... What was clear, from my perspective, was that operators were ready at last to address latency (as opposed to, or in complement of, bandwidth) as a key resource and attribute to discriminate services and their associated network slices.

Big Data slid right down the hype curve this year, with very little in terms of announcements or even references in vendors' product launches or deployments. It now seems granted that any piece of network equipment, physical or virtual, must generate rivulets of data that stream into rivers and data lakes, to be avidly aggregated and correlated by machine learning algorithms into actionable insights in the form of analytics and alerts. Vendors show progress in reporting, but true multi-vendor, holistic analytics remains extremely difficult, due to the fragmentation of vendors' data attributes and the necessity of having data scientists and subject matter experts work together to discriminate actionable insights from false positives.

On the services side, augmented and virtual reality were revving up to the next hype phase, with a multitude of attendees walking blindly with goggles and smartphones stuck to their faces... not the smartest look, and unlikely to pass the novelty stage until integrated into less obtrusive displays. On the AR front, convincing use cases are starting to emerge, such as furniture shopping (whereby you can see and position furniture in your home by superimposing items from a catalogue app), that are pragmatic and useful without being too cumbersome. Anyone who has had to shop for furniture and send it back because it did not fit, or because the color wasn't quite the same as the room's, will understand.
Ad blocking certainly became a subject of increased interest, as operators and service providers are still struggling for dominance. As encrypted data traffic increases, operators are starting to explore ways to provide services that users see as valuable, and if these hurt some of the OTTs' business models, that is certainly an additional bargaining chip. The melding and reforming of the mobile value chain continues and accelerates, with increased competition, collaboration and coopetition as MNOs and OTTs find a settling position. I have recently ranted about what's wrong with the mobile value chain, so I will spare you here.

At last, my personal interest project this year revolves around Mobile Edge Computing. I have started production on a report on the subject. I think the technology has the potential to unlock many new services in mobile networks, and I can't wait to tell you more about it. Stay tuned for more!

Thursday, July 9, 2015

Announcing SDN / NFV in wireless 2015

On the heels of my presentation at the NFV World Congress in San Diego this spring, my presentation and panels at LTE World Summit on network virtualization, and my anticipated participation at the SDN & OpenFlow World Summit in the fall, I am happy to announce production of "SDN / NFV in wireless networks 2015".

This report, to be released in September, will feature my review of the progress of SDN and NFV as technologies transitioning from PoC to commercial trials and limited deployments in wireless networks.



The report provides a step by step strategy for introducing SDN and NFV in your product and services development.


  • Drivers for SDN and NFV in telecom networks 
  • Public, private, hybrid, specialized clouds 
  • Review of SDN and NFV standards and open source initiatives
  • SDN 
    • Service chaining
    • Apache CloudStack, Microsoft Cloud OS, Red Hat, Citrix CloudPlatform, OpenStack, VMWare vCloud
    • SDN controllers (OpenDaylight, ONOS) 
    • SDN protocols (OpenFlow, NETCONF, ForCES, YANG...)
  • NFV 
    • ETSI ISG NFV 
    • OPNFV 
    • OpenMANO 
    • NFVRG 
    • MEF LSO 
    • Hypervisors: VMWare vs. KVM, vs Containers
  • How does it all fit together? 
  • Core and RAN networks NFV roadmap
  • Operators strategy and deployments review: AT&T, China Unicom, Deutsche Telekom, EE, Telecom Italia, Telefonica, Verizon...
  • Vendors strategy and roadmap review: Affirmed networks, ALU, Cisco, Ericsson, F5, HP, Huawei, Intel, Juniper, Oracle, Red Hat... 
Can't wait for the report? Want more in-depth, personalized training? A 5-hour workshop and strategy session is available now to answer your specific questions and help you chart your product and services roadmap, while understanding your competitors' strategy and progress.