Tuesday, February 10, 2026

Where Do Network Operators Go From Here? A View Ahead of MWC 2026

With Mobile World Congress just around the corner in Barcelona, the telecom sector finds itself at another inflection point. The headlines are familiar: ongoing layoffs across major operators, C-level reshuffles, persistent ARPU erosion, and debt structures that constrain organic investment. Vendors are already talking up 6G roadmaps while AI dominates conversations—both for aggressive OPEX reduction and tentative new revenue paths. Yet the near-term reality feels more evolutionary than revolutionary.

The recent wave of workforce reductions is not, in my view, primarily an AI story—at least not yet. It reflects the long tail of a structural shift that began over a decade ago: the gradual but relentless transition from proprietary telco platforms to cloud-native architectures. We are finally seeing the full operational benefits of user/control-plane separation, hardware/software disaggregation, widespread network virtualization, and centralized policy orchestration. These changes deliver greater automation, elastic scaling, and dramatically shorter development and validation cycles. The outcome is clear: managing a modern mobile network no longer requires the headcount levels of the previous era. Painful as the adjustment is, it is the inevitable consequence of borrowing proven cloud-native principles. Cost discipline is essential, but it is not a growth strategy. The more pressing question is how operators convert more reliable, elastic, and automated networks into sustainable revenue expansion.

Private Networks: Successes Exist, but They Remain Hard-Won

Private cellular networks continue to polarize opinion. Some portray them as a commercial disappointment; others point to hundreds of documented use cases. The reality sits firmly in between. Genuine deployments delivering positive returns do exist, particularly in verticals with high-value connectivity requirements and tolerance for tailored solutions. Energy (smart grids and remote monitoring), healthcare (indoor coverage in hospitals and clinics), large venues (stadiums and event spaces), mining (autonomous haulage and safety systems), and ports (crane automation and terminal logistics) stand out as segments where demand is tangible and economics can work. The common thread in successful cases is not technology alone but deployment philosophy: cloud-native designs that run on commodity hardware, leverage centralized intelligence, and minimize site-specific customization. When executed this way, private networks become scalable and margin-accretive rather than bespoke projects that drain resources. Operators who treat private 5G as an extension of their public edge and orchestration capabilities—rather than isolated silos—are better positioned to capture repeatable value.

Data: The Next Realistic Monetization Frontier

Beyond connectivity and private networks, operators sit on an underutilized asset: vast quantities of network-derived and network-transported data. Until recently, most of this information has been confined to internal analytics, dashboards, and regulatory reporting. That picture is beginning to change. Monetization remains nascent compared with the advertising-driven models of social platforms, yet the opportunity is material. API gateways that expose selected network and user context (location aggregates, mobility patterns, congestion signals, roaming events) represent only the surface layer. Consider a few practical illustrations:
  • Ride-hailing platforms could benefit from near-real-time insight into clusters of international roamers converging in a city district—an indicator of an upcoming conference, trade show, or major event. Pre-positioning drivers becomes more efficient, improving service levels and reducing wait times.
  • eSIM and travel-focused virtual operators could package value-added bundles—discounted car rentals, hotel reservations, restaurant bookings, or attraction tickets—targeted at detected travelers arriving in high-demand locations.
  • Navigation services (Google Maps, Waze, and equivalents) could gain from telco-sourced, fine-grained congestion and flow data that augments probe-vehicle inputs, especially in areas with sparse device coverage or during atypical events.

Privacy and regulatory compliance are non-negotiable hurdles, as are competitive dynamics with hyperscalers and data aggregators. Success will depend on responsible data handling, anonymization at scale, clear value propositions for enterprise partners, and commercial models that avoid commoditization. Operators that can evolve from pure connectivity providers toward curated data intermediaries—leveraging their unique position across physical infrastructure, subscriber scale, and real-time network telemetry—stand to capture incremental revenue without requiring entirely new network builds.

As we head to MWC 2026, the conversation will likely revolve around AI acceleration, 6G timelines, and edge monetization. Beneath the buzz, though, the fundamentals remain: disciplined cost management, selective private-network wins, and thoughtful exploration of data opportunities.

What are you seeing in your markets? Are private networks crossing the chasm in specific verticals? And where do you place data monetization on the priority list for the next 18–24 months? I welcome your perspectives in the comments.

Thursday, January 29, 2026

Physical AI: How Network Operators Could Leverage Edge Computing for Smarter Robotics

As the telecom landscape evolves, one emerging trend that's catching my eye is Physical AI—the integration of advanced AI into physical devices like robots, enabling them to interact intelligently with the real world. With my background in telco-cloud strategy, I'm particularly intrigued by how network operators could position themselves as key enablers in this space. By providing low-latency edge infrastructure, telcos might unlock new revenue streams while supporting innovative applications that blend robotics, computer vision, and conversational AI.

In a recent analysis, I've been exploring how robots equipped with cameras and speakers could benefit from distributed AI processing at the network edge. This setup allows for real-time scene analysis, object detection, facial recognition, and natural language interactions with humans—all without relying solely on centralized clouds that introduce delays or high costs.

What is Physical AI?

Physical AI refers to AI systems embodied in hardware that perceive, reason, and act in physical environments. Unlike traditional AI that's confined to software, this involves robots or devices that use sensors (like cameras) to understand their surroundings and actuators (like speakers) to respond. The key challenge? Processing massive data streams in real time while maintaining privacy, efficiency, and low latency. This is where telco networks shine, with their distributed edge nodes offering compute power closer to the action.
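To make the perceive-reason-act framing concrete, here is a deliberately toy sketch in Python of what an embodied control loop looks like. Everything in it is a stand-in I invented for illustration (a stubbed sensor, a hard-coded policy, print() in place of text-to-speech); the point is only the shape of the loop and the per-cycle time budget that real-time operation imposes.

    import random
    import time

    # Minimal sense-reason-act loop. The sensor and actuator are stubs; in a
    # real robot they would wrap a camera driver and a speaker, and reason()
    # would call a local or edge-hosted model instead of a hard-coded rule.

    def perceive():
        # Stand-in for grabbing and analyzing a camera frame.
        return {"person_detected": random.random() < 0.3}

    def reason(observation):
        # Stand-in for scene understanding and dialogue policy.
        if observation["person_detected"]:
            return "Hello! Can I help you find something?"
        return None

    def act(utterance):
        # Stand-in for text-to-speech on the robot's speaker.
        print(f"[robot says] {utterance}")

    def run(hz=2, cycles=10):
        # At 2 Hz each cycle has roughly 500 ms for sensing, inference,
        # and actuation combined; faster loops shrink that budget.
        period = 1.0 / hz
        for _ in range(cycles):
            start = time.time()
            utterance = reason(perceive())
            if utterance:
                act(utterance)
            time.sleep(max(0.0, period - (time.time() - start)))

    if __name__ == "__main__":
        run()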

Edge AI Inference: Powering Perception in Robotics

Operators could facilitate edge-based AI inference, where robots offload complex tasks like scene recognition, object identification, and facial analysis to nearby network edges. For instance, a service robot in a retail store could use its camera to scan the environment: edge inference quickly identifies products on shelves, detects customer faces for personalized greetings (with privacy safeguards), or recognizes obstacles so the robot can navigate safely. Keeping that processing at the edge, with round trips in the low tens of milliseconds and potentially under 10 ms on well-placed sites, avoids the pitfalls of cloud round-trips, reduces bandwidth usage, and enables seamless, responsive interactions.
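As a rough sketch of what that offload could look like from the robot's side, here is a minimal Python client that ships one camera frame to a nearby edge node and reads back detections. The endpoint URL, path, and response schema are assumptions made up for illustration, not an existing operator API; the tight timeout and the on-device fallback are the design points worth noting.

    import requests  # third-party HTTP client (pip install requests)

    # Hypothetical edge inference endpoint. The domain, the /v1/vision/detect
    # path, and the response schema below are assumptions for illustration,
    # not a standard operator API.
    EDGE_ENDPOINT = "https://edge.example-operator.net/v1/vision/detect"

    def detect_objects(jpeg_bytes, timeout_s=0.05):
        """Send one camera frame to a nearby edge node and return detections.

        A tight timeout (50 ms here) reflects the design goal: if the edge
        answer does not arrive quickly, the robot should fall back to a
        smaller on-device model rather than stall.
        """
        response = requests.post(
            EDGE_ENDPOINT,
            data=jpeg_bytes,
            headers={"Content-Type": "image/jpeg"},
            timeout=timeout_s,
        )
        response.raise_for_status()
        # Assumed response shape:
        # {"objects": [{"label": "...", "score": 0.97, "box": [x1, y1, x2, y2]}]}
        return response.json()["objects"]

    if __name__ == "__main__":
        with open("frame.jpg", "rb") as f:
            frame = f.read()
        try:
            for obj in detect_objects(frame):
                print(obj["label"], obj["score"])
        except requests.RequestException:
            print("Edge unavailable or too slow; fall back to on-device model.")

The fallback path matters as much as the happy path: if the edge node is congested or unreachable, the robot should degrade to a smaller local model rather than freeze mid-interaction.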

Techniques like federated learning could further enhance this, allowing robots to fine-tune models collaboratively across distributed edges without sharing raw data—ideal for maintaining user privacy in sensitive scenarios.
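Here is a toy illustration of that idea, in the spirit of federated averaging: each edge site trains on its own local data and ships only updated weights to an aggregator, which averages them weighted by data volume. The local "training" step below is a stub invented purely to make the round-trip runnable; the privacy property is the same either way: only weights move between sites, never raw frames.

    import numpy as np

    def local_train(global_weights, local_data, lr=0.01):
        # Placeholder for real on-site training: nudge the weights toward the
        # mean of the local data so each site produces a distinct update.
        return global_weights + lr * (local_data.mean(axis=0) - global_weights)

    def federated_round(global_weights, sites):
        updates, sizes = [], []
        for local_data in sites:
            updates.append(local_train(global_weights, local_data))
            sizes.append(len(local_data))
        # Weight each site's update by how much data it holds (FedAvg-style).
        sizes = np.array(sizes, dtype=float)
        return np.average(np.stack(updates), axis=0, weights=sizes)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Three edge sites with differently distributed, differently sized data.
        sites = [rng.normal(loc=i, size=(100 + 50 * i, 4)) for i in range(3)]
        weights = np.zeros(4)
        for round_idx in range(5):
            weights = federated_round(weights, sites)
            print(f"round {round_idx}: {np.round(weights, 3)}")

Real deployments layer secure aggregation, client selection, and drift handling on top of this, but the core exchange stays the same.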

Generative AI for Natural Language Conversations

Pair that with generative AI models running at the edge for conversational capabilities. Robots with speakers could engage in fluid, context-aware dialogues: a healthcare assistant bot recognizes a patient's face, infers emotional state from scene cues, and generates empathetic responses using natural language processing. Or in manufacturing, a collaborative robot converses with workers in real time—"Hand me the red tool"—while using object recognition to confirm and act.
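A rough sketch of how the pieces could fit together on the robot's side: perception outputs (who was recognized, what the scene looks like) get fused with the user's words into a single prompt for an edge-hosted language model. The endpoint URL and model name below are assumptions for illustration; the request and response shape follows the widely used OpenAI-compatible chat-completions schema that many self-hosted inference servers expose.

    import requests  # pip install requests

    # Hypothetical edge-hosted LLM endpoint; the URL and model name are
    # assumptions, the payload format follows the common chat-completions schema.
    EDGE_LLM_URL = "https://edge.example-operator.net/v1/chat/completions"

    def compose_reply(person_name, scene_cues, user_utterance):
        """Fuse perception outputs with the user's words into one LLM prompt."""
        system = (
            "You are a polite service robot. Keep replies to one or two "
            "sentences suitable for text-to-speech."
        )
        context = (
            f"Recognized person: {person_name or 'unknown'}. "
            f"Scene cues: {', '.join(scene_cues) or 'none'}."
        )
        payload = {
            "model": "edge-assistant",  # assumed model name on the edge server
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": f"{context}\nUser said: {user_utterance}"},
            ],
            "max_tokens": 60,
        }
        resp = requests.post(EDGE_LLM_URL, json=payload, timeout=1.0)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        print(compose_reply("Maria", ["waiting area", "holding a prescription"],
                            "Where do I pick up my medication?"))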

By offering "AI-as-a-Service" at the edge, operators could provide scalable, usage-based access to these capabilities. Enterprises get high-performance AI without massive capex on private infrastructure, while telcos monetize their pervasive networks.

Real-World Opportunities and Examples

Consider verticals ripe for this:

  • Retail and hospitality: Robots greeting customers by name (via facial recognition), recommending items based on scene analysis, and chatting naturally to assist.
  • Healthcare: Companion bots in hospitals using edge inference to monitor patient environments, detect falls, and converse to provide reminders or emotional support.
  • Logistics and manufacturing: Autonomous robots navigating warehouses, identifying inventory via object and scene recognition, and collaborating verbally with human teams.
  • Smart cities: Public service bots patrolling areas, recognizing incidents (e.g., litter or crowds), and interacting with citizens through voice.

These use cases could drive B2B partnerships, where operators bundle connectivity with edge AI compute—potentially adding 10-20% to ARPU through premium services.

Considerations for Carriers

To capitalize, carriers might assess their edge footprints for AI readiness, pilot federated models for privacy, and collaborate with robot vendors or AI platforms. Challenges like energy efficiency and standardization remain, but the rewards in a growing Physical AI market make it worth exploring.