Tuesday, February 10, 2026

Where Do Network Operators Go From Here? A View Ahead of MWC 2026

With Mobile World Congress just around the corner in Barcelona, the telecom sector finds itself at another inflection point. The headlines are familiar: ongoing layoffs across major operators, C-level reshuffles, persistent ARPU erosion, and debt structures that constrain organic investment. Vendors are already talking up 6G roadmaps while AI dominates conversations—both for aggressive OPEX reduction and tentative new revenue paths. Yet the near-term reality feels more evolutionary than revolutionary.

The recent wave of workforce reductions is not, in my view, primarily an AI story—at least not yet. It reflects the long tail of a structural shift that began over a decade ago: the gradual but relentless transition from proprietary telco platforms to cloud-native architectures. We are finally seeing the full operational benefits of user/control-plane separation, hardware/software disaggregation, widespread network virtualization, and centralized policy orchestration. These changes deliver greater automation, elastic scaling, and dramatically shorter development and validation cycles. The outcome is clear: managing a modern mobile network no longer requires the headcount levels of the previous era. Painful as the adjustment is, it is the inevitable consequence of borrowing proven cloud-native principles. Cost discipline is essential, but it is not a growth strategy. The more pressing question is how operators convert more reliable, elastic, and automated networks into sustainable revenue expansion.

Private Networks: Successes Exist, but They Remain Hard-Won

Private cellular networks continue to polarize opinion. Some portray them as a commercial disappointment; others point to hundreds of documented use cases. The reality sits firmly in between. Genuine deployments delivering positive returns do exist, particularly in verticals with high-value connectivity requirements and tolerance for tailored solutions. Energy (smart grids and remote monitoring), healthcare (indoor coverage in hospitals and clinics), large venues (stadiums and event spaces), mining (autonomous haulage and safety systems), and ports (crane automation and terminal logistics) stand out as segments where demand is tangible and economics can work. The common thread in successful cases is not technology alone but deployment philosophy: cloud-native designs that run on commodity hardware, leverage centralized intelligence, and minimize site-specific customization. When executed this way, private networks become scalable and margin-accretive rather than bespoke projects that drain resources. Operators who treat private 5G as an extension of their public edge and orchestration capabilities—rather than isolated silos—are better positioned to capture repeatable value.

Data: The Next Realistic Monetization Frontier

Beyond connectivity and private networks, operators sit on an underutilized asset: vast quantities of network-derived and network-transported data. Until recently, most of this information was siloed in internal analytics, dashboards, and regulatory reporting. That picture is beginning to change. Monetization remains nascent compared with the advertising-driven models of social platforms, yet the opportunity is material. API gateways that expose selected network and user context (location aggregates, mobility patterns, congestion signals, roaming events) represent only the surface layer. Consider a few practical illustrations:
  • Ride-hailing platforms could benefit from near-real-time insight into clusters of international roamers converging in a city district—an indicator of an upcoming conference, trade show, or major event. Pre-positioning drivers becomes more efficient, improving service levels and reducing wait times.
  • eSIM and travel-focused virtual operators could package value-added bundles—discounted car rentals, hotel reservations, restaurant bookings, or attraction tickets—targeted at detected travelers arriving in high-demand locations.
  • Navigation services (Google Maps, Waze, and equivalents) could gain from telco-sourced, fine-grained congestion and flow data that augments probe-vehicle inputs, especially in areas with sparse device coverage or during atypical events.

Privacy and regulatory compliance are non-negotiable hurdles, as are competitive dynamics with hyperscalers and data aggregators. Success will depend on responsible data handling, anonymization at scale, clear value propositions for enterprise partners, and commercial models that avoid commoditization. Operators that can evolve from pure connectivity providers toward curated data intermediaries—leveraging their unique position across physical infrastructure, subscriber scale, and real-time network telemetry—stand to capture incremental revenue without requiring entirely new network builds.

As we head to MWC 2026, the conversation will likely revolve around AI acceleration, 6G timelines, and edge monetization. Beneath the buzz, though, the fundamentals remain: disciplined cost management, selective private-network wins, and thoughtful exploration of data opportunities.

What are you seeing in your markets? Are private networks crossing the chasm in specific verticals? And where do you place data monetization on the priority list for the next 18–24 months? I welcome your perspectives in the comments.

Thursday, January 29, 2026

Physical AI: How Network Operators Could Leverage Edge Computing for Smarter Robotics

As the telecom landscape evolves, one emerging trend that's catching my eye is Physical AI—the integration of advanced AI into physical devices like robots, enabling them to interact intelligently with the real world. With my background in telco-cloud strategy, I'm particularly intrigued by how network operators could position themselves as key enablers in this space. By providing low-latency edge infrastructure, telcos might unlock new revenue streams while supporting innovative applications that blend robotics, computer vision, and conversational AI.

In a recent analysis, I've been exploring how robots equipped with cameras and speakers could benefit from distributed AI processing at the network edge. This setup allows for real-time scene analysis, object detection, facial recognition, and natural language interactions with humans—all without relying solely on centralized clouds that introduce delays or high costs.

What is Physical AI?

Physical AI refers to AI systems embodied in hardware that perceive, reason, and act in physical environments. Unlike traditional AI that's confined to software, this involves robots or devices that use sensors (like cameras) to understand their surroundings and actuators (like speakers) to respond. The key challenge? Processing massive data streams in real time while maintaining privacy, efficiency, and low latency. This is where telco networks shine, with their distributed edge nodes offering compute power closer to the action.

Edge AI Inference: Powering Perception in Robotics

Operators could facilitate edge-based AI inference, where robots offload complex tasks like scene recognition, object identification, and facial analysis to nearby network edges. For instance, a service robot in a retail store uses its camera to scan the environment: edge inference quickly identifies products on shelves, detects customer faces for personalized greetings (with privacy safeguards), or recognizes obstacles to navigate safely. This sub-10ms processing avoids the pitfalls of cloud round-trips, reducing bandwidth usage and enabling seamless, responsive interactions.
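To make the offload pattern concrete, here is a minimal Python sketch of a robot shipping camera frames to a nearby edge inference endpoint. Everything specific in it (the endpoint URL, the task list, the response shape) is an invented illustration, not a standardized API:

```python
import requests  # third-party HTTP client; any equivalent works

# Hypothetical operator edge endpoint; a real deployment would discover the
# nearest node (e.g., via DNS or an edge orchestrator) rather than hard-code it.
EDGE_ENDPOINT = "https://edge.example-operator.net/v1/vision/infer"

def offload_frame(jpeg_bytes: bytes) -> dict:
    """Ship one camera frame to the edge and return its detections.
    The response shape (objects, faces, obstacles) is assumed for illustration."""
    resp = requests.post(
        EDGE_ENDPOINT,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        data={"tasks": "objects,faces,obstacles"},
        timeout=0.05,  # fail fast: if the edge cannot answer in ~50 ms, fall back locally
    )
    resp.raise_for_status()
    return resp.json()

# Example: greet a detected person, otherwise keep navigating.
# detections = offload_frame(camera.capture_jpeg())
# if any(o["label"] == "person" for o in detections["objects"]):
#     robot.speak("Welcome! How can I help?")
```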

Techniques like federated learning could further enhance this, allowing robots to fine-tune models collaboratively across distributed edges without sharing raw data—ideal for maintaining user privacy in sensitive scenarios.
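For readers who like to see the mechanics, the toy sketch below captures the core of federated averaging: each robot computes a weight update on data that never leaves the device, and only the weights are aggregated. The placeholder gradient stands in for a real on-device training step:

```python
import numpy as np

def local_update(weights: np.ndarray) -> np.ndarray:
    """Stand-in for on-robot fine-tuning: in a real system this would be a few
    gradient steps on the robot's private data, which never leaves the device."""
    simulated_gradient = np.random.randn(*weights.shape) * 0.01
    return weights - simulated_gradient

def federated_round(global_weights: np.ndarray, num_robots: int) -> np.ndarray:
    """One federated-averaging round: robots train locally, only weights travel."""
    updates = [local_update(global_weights.copy()) for _ in range(num_robots)]
    return np.mean(updates, axis=0)  # the aggregator sees weights, never raw data

weights = np.zeros(10)
for _ in range(5):  # five communication rounds across three robots
    weights = federated_round(weights, num_robots=3)
print(weights[:3])
```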

Generative AI for Natural Language Conversations

Pair that with generative AI models running at the edge for conversational capabilities. Robots with speakers could engage in fluid, context-aware dialogues: a healthcare assistant bot recognizes a patient's face, infers emotional state from scene cues, and generates empathetic responses using natural language processing. Or in manufacturing, a collaborative robot converses with workers in real time—"Hand me the red tool"—while using object recognition to confirm and act.

By offering "AI-as-a-Service" at the edge, operators could provide scalable, usage-based access to these capabilities. Enterprises get high-performance AI without massive capex on private infrastructure, while telcos monetize their pervasive networks.
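As a rough sketch of what that could look like from the robot's side, the example below grounds a conversational prompt in the scene analysis from the earlier sketch and calls a hypothetical edge-hosted model. The endpoint, model name, and OpenAI-style message schema are assumptions for illustration:

```python
import requests

# Hypothetical edge-hosted model endpoint; the OpenAI-style chat schema is an
# assumption for illustration, not a claim about any particular product.
EDGE_LLM = "https://edge.example-operator.net/v1/chat/completions"

def converse(scene: dict, utterance: str) -> str:
    """Ground the reply in what the robot currently sees, then generate speech text."""
    visible = ", ".join(obj["label"] for obj in scene.get("objects", []))
    resp = requests.post(
        EDGE_LLM,
        json={
            "model": "edge-small-llm",  # invented model name
            "messages": [
                {"role": "system",
                 "content": f"You are a collaborative factory robot. Visible objects: {visible}."},
                {"role": "user", "content": utterance},
            ],
        },
        timeout=2.0,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# reply = converse(scene={"objects": [{"label": "red tool"}]}, utterance="Hand me the red tool")
```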

Real-World Opportunities and Examples

Consider verticals ripe for this:

  • Retail and hospitality: Robots greeting customers by name (via facial recognition), recommending items based on scene analysis, and chatting naturally to assist.
  • Healthcare: Companion bots in hospitals using edge inference to monitor patient environments, detect falls, and converse to provide reminders or emotional support.
  • Logistics and manufacturing: Autonomous robots navigating warehouses, identifying inventory via objects/scenes, and collaborating verbally with human teams.
  • Smart cities: Public service bots patrolling areas, recognizing incidents (e.g., litter or crowds), and interacting with citizens through voice.

These use cases could drive B2B partnerships, where operators bundle connectivity with edge AI compute—potentially adding 10-20% to ARPU through premium services.

Considerations for Carriers

To capitalize, carriers might assess their edge footprints for AI readiness, pilot federated models for privacy, and collaborate with robot vendors or AI platforms. Challenges like energy efficiency and standardization remain, but the rewards in a growing Physical AI market make it worth exploring.

Wednesday, January 28, 2026

Distributed AI at the Edge: Opportunities for Telecom Networks in an Evolving AI Landscape

The rapid growth of AI applications is creating new demands on network infrastructure, particularly for low-latency, distributed processing close to end-users and devices. Rather than remaining focused solely on connectivity, telecom networks are increasingly positioning themselves to support distributed AI capabilities—where inference and even lightweight training can occur at the edge. This shift opens interesting possibilities for operators to play a more central role in the broader AI ecosystem. In a recent interview at FYUZ 2025 (the Telecom Infra Project's flagship event in Dublin), I had the opportunity to discuss these dynamics with TelecomTV. The conversation centered on a practical question: How might telco networks evolve from traditional mobile broadband platforms to ones that can meaningfully support distributed AI workloads?

The Emerging Demands on Networks for Distributed AI

AI inference, and in some cases lightweight training at the edge, benefits significantly from response times below 10 milliseconds and access to distributed parallel processing. Centralized cloud architectures face inherent limitations in these scenarios—issues such as data gravity, backhaul congestion, and rising energy requirements often make proximity to the data source or user essential. AI workloads tend to be compute- and power-intensive, and telecom networks already manage substantial energy footprints; integrating AI processing without thoughtful optimization could increase both costs and environmental impact. At the same time, the limitations of static resource allocation become more apparent—networks increasingly need mechanisms for dynamic, policy-aware traffic prioritization, capacity allocation, and workload steering.

How AI-Integrated RAN Can Support Distributed AI Capabilities

One approach carriers are exploring involves integrating AI capabilities directly into the Radio Access Network (AI RAN). This embeds intelligence into the radio layer, enabling distributed inference and lightweight training to take place across the network's existing footprint of base stations, central offices or MSOs, edge nodes, and fiber backhaul. The result is a pervasive mesh of compute resources located close to users and devices.

Distributed inference allows models to be partitioned and processed in parallel at multiple edge points, significantly reducing latency by keeping data local rather than sending it to distant centralized facilities. Where models need fine-tuning based on fresh, real-time data, techniques such as federated learning offer a way to train collaboratively across distributed locations while maintaining data privacy and avoiding the need to aggregate sensitive information centrally.
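A short PyTorch sketch may help make the partitioning idea tangible: a toy model is split at a cut point so the early layers run on the device and the rest at an edge node, with only the intermediate activation crossing the link. The architecture and cut point are invented for illustration:

```python
import torch
import torch.nn as nn

# Toy vision model split at a cut point: the "head" runs on the device, the
# "tail" at an edge node. In practice the cut is chosen to balance on-device
# compute against the size of the tensor that must cross the radio link.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),   # head: runs on the device
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # tail: runs at the edge
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
head, tail = model[:2], model[2:]

frame = torch.randn(1, 3, 224, 224)   # one camera frame (random stand-in)
activation = head(frame)              # computed locally

# Here the intermediate tensor would be serialized and sent over the network;
# this sketch simply passes it along in-process.
logits = tail(activation.detach())    # computed at the edge node
print(logits.shape)                   # torch.Size([1, 10])
```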

Internal Opportunities for Carriers

Carriers could apply these distributed AI capabilities to improve their own network operations. For example, predictive maintenance can become more effective when AI models analyze real-time sensor data from base stations to anticipate equipment issues, enabling proactive interventions that help reduce unplanned downtime.
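As a minimal illustration, the sketch below applies a rolling z-score to a simulated base-station temperature series; the window size, threshold, and sensor values are invented for the example:

```python
import numpy as np

def anomaly_scores(readings: np.ndarray, window: int = 96) -> np.ndarray:
    """Rolling z-score over a site sensor series (say, power-amplifier temperature).
    Sustained scores above ~3 would flag drift worth a proactive site visit;
    the window size and threshold here are illustrative, not field-calibrated."""
    scores = np.zeros(len(readings))
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        std = hist.std() or 1e-9      # guard against a perfectly flat window
        scores[i] = abs(readings[i] - hist.mean()) / std
    return scores

# Simulate 200 healthy readings, then the onset of a thermal drift.
temps = np.concatenate([np.random.normal(45, 0.5, 200), np.random.normal(52, 0.5, 20)])
print(np.round(anomaly_scores(temps)[198:203], 1))  # scores jump well past 3 at the onset
```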

Traffic management stands to benefit as well—distributed inference at the edge can forecast congestion patterns and dynamically adjust routing to preserve service quality during high-demand periods.

Energy optimization is another area of potential gain, with AI learning from usage patterns to make real-time decisions, such as reducing power to underutilized radio resources during quieter hours. In many cases, these internal improvements could deliver operational cost reductions of 20-30% while enhancing overall network reliability, often without requiring large-scale new investments in specialized AI hardware.
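A toy policy loop makes the mechanic concrete: shed component carriers when forecast load is low, and restore them ahead of peaks. The thresholds and data model are illustrative assumptions, not vendor parameters:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    cell_id: str
    carriers_active: int       # component carriers currently powered on
    predicted_prb_load: float  # forecast PRB utilization for the next window, 0..1

def energy_policy(cell: Cell, low: float = 0.15, high: float = 0.60) -> int:
    """Toy sleep-mode policy: shed a carrier when forecast load is low, restore
    capacity before it climbs. Real deployments would layer this on vendor
    energy-saving features and KPI guardrails; these thresholds are invented."""
    if cell.predicted_prb_load < low and cell.carriers_active > 1:
        return cell.carriers_active - 1   # power one carrier down in the quiet hours
    if cell.predicted_prb_load > high:
        return cell.carriers_active + 1   # bring capacity back ahead of the peak
    return cell.carriers_active

quiet_cell = Cell("site-042/sector-1", carriers_active=3, predicted_prb_load=0.08)
print(energy_policy(quiet_cell))  # -> 2: one carrier can sleep overnight
```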

Enterprise Potential: The Promise of AIaaS

From a business-to-business perspective, distributed AI at the edge could allow operators to offer "AI-as-a-Service" models to enterprises that require low-latency inference but lack the capital or desire to build their own edge infrastructure. Small and medium-sized enterprises across sectors such as manufacturing, retail, and logistics often face this constraint. By leveraging the operator's distributed edge, inference tasks can be offloaded on a usage-based basis, making high-performance AI more accessible without heavy upfront expenditure.
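Commercially, the model reduces to metering each inference call and the compute time behind it. The sketch below is deliberately simplified; the rates, tenant name, and billing dimensions are invented for illustration:

```python
import time
from collections import defaultdict

# Toy usage meter for an edge "AI-as-a-Service" offer: bill per inference call
# and per millisecond of compute behind it. All figures are invented.
RATES = {"inference": 0.0004, "compute_ms": 0.00001}  # currency units per unit

usage = defaultdict(lambda: {"inference": 0, "compute_ms": 0.0})

def run_inference(tenant: str, model_fn, payload):
    """Run a model call on behalf of a tenant and record what it consumed."""
    start = time.perf_counter()
    result = model_fn(payload)  # stands in for the actual edge inference call
    usage[tenant]["inference"] += 1
    usage[tenant]["compute_ms"] += (time.perf_counter() - start) * 1000
    return result

def invoice(tenant: str) -> float:
    u = usage[tenant]
    return u["inference"] * RATES["inference"] + u["compute_ms"] * RATES["compute_ms"]

run_inference("acme-logistics", lambda frame: frame[::-1], b"frame-bytes")
print(f"{invoice('acme-logistics'):.6f}")
```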

Real-world examples help illustrate the potential.

  • In manufacturing, autonomous robotics depend on real-time object detection and path planning; inference performed at the nearest base station can deliver sub-10ms decisions, avoiding production interruptions without the facility needing to deploy its own compute resources.
  • Field technicians in utilities or construction working with augmented reality tools can receive AI-generated diagnostics overlaid on live video feeds—processed at the edge for instant fault identification, such as detecting structural cracks, supporting faster decisions in remote settings.
  • Retail operations can use edge-based smart analytics to interpret camera feeds for customer behavior insights or immediate security alerts, generating millisecond-level responses without on-site servers.
  • In healthcare, wearables transmitting vital signs for anomaly detection (for instance, flagging potential cardiac events) can benefit from low-latency edge processing to deliver timely alerts, particularly valuable in rural or resource-constrained clinics.
  • Cloud gaming environments can also gain from edge-handled AI upscaling of graphics or intelligent NPC behavior, substantially reducing perceived lag for players and smaller studios that lack powerful local hardware.

By structuring these capabilities as on-demand, sliced services, operators could create additional revenue streams while enabling enterprises to adopt AI more broadly without prohibitive capital requirements.

Considerations for Moving Forward

Operators interested in these opportunities might begin by assessing their current latency profiles, edge compute footprint, and level of AI integration. From there, they could prioritize pilot deployments focused on inference before exploring federated training approaches for stronger privacy controls. Partnerships with cloud providers could help develop hybrid models that combine telco edge strengths with broader AI ecosystems. Early monetization might involve introducing "AI-Ready Connectivity" services—low-latency slices, edge GPU access, and intelligent routing designed for enterprises building AI-driven applications.

Telecom networks already offer a distinctive advantage: widespread, low-latency reach to millions of endpoints. Carriers that thoughtfully explore distributed AI capabilities could position themselves as important contributors to the evolving AI infrastructure landscape, potentially unlocking meaningful new value in a growing market.