Monday, March 23, 2026

From AI-Native to Agentic-Native Networks

Recent announcements at NVIDIA's GTC 2026—including major pushes into agentic AI frameworks like OpenClaw, NemoClaw, and agentic systems for reasoning, planning, and autonomous action—have reinforced several convictions I've held about the trajectory of AI-native infrastructure, especially in telecom and networked industries. We're seeing the emergence of two distinct paradigms:

  • AI-Native networks that observe, detect, optimize, and predict in real time. These systems augment human decision-making, providing powerful assistance in planning, deploying, and managing both physical and virtual infrastructure.
  • Agentic-Native networks, by contrast, eliminate the human-in-the-loop entirely. When equipped with real-time data access, transactional capabilities, and fulfillment capacity, they execute at a speed limited only by the slowest link in the supply chain.

This second model doesn't just accelerate execution—it fundamentally reprices time itself as a competitive asset.

As Jordi Visser articulates in his insightful piece "The Repricing of Time: Equity in the Age of Agents", agentic AI compresses competitive cycles dramatically. Velocity of execution no longer merely helps fulfill a plan faster; it redefines the playing field. When capabilities can be reconfigured almost overnight through model iterations or agent orchestration, durable moats erode. What once took decades to build—layered expertise, entrenched positions, regulatory barriers—can now be challenged or leapfrogged in months.

In this environment, equity behaves more like a call option on execution speed than on long-duration stability. "Execution speed replaces installed base. Iteration cadence replaces headcount." The advantage shifts decisively toward those who can pivot, adapt, innovate, and execute rapidly.

This dynamic hits telecom particularly hard.

Most operators are desperate to escape the "utility trench"—the low-margin, commodity perception that has trapped connectivity providers for years. They aspire to new revenue streams beyond pipes and bandwidth.

From my own experience modeling, teaching, and advising organizations on this challenge (see earlier pieces on innovation micro-strategies, telco relevance and growth, and the lean telco), there is no single silver bullet. No grand transformation program that magically reinvents the business.

Instead, the path forward involves thousands of micro-services and experiments: create, test, fail fast, pivot, scale the winners, and launch repeatedly. The era of one-size-fits-all offerings is over.

Agentic-native networks offer exactly the infrastructure to make this high-velocity approach viable at scale. They enable rapid creation, iteration, value capture, and deployment—turning velocity, flawless execution, and clear strategic vision into the new currency that outcompetes inertia, legacy systems, and eroding differentiation.

For telecom leaders, the message from GTC 2026 is clear: agentic AI can free up resources and help accelerate innovation at scale. Those who embrace this shift—building or partnering for agentic capabilities—will be the ones that don't just survive the repricing of time, but help define the next era of networked value creation.

Monday, March 16, 2026

The philosophical problem with agentic AI


Jensen Huang’s address at GTC gave me a lot to think about. So much so that I decided to drive to Santa Cruz for a taste of the ocean. I had to wait 30 minutes to get the table I wanted, just by the beach, in the sun but with a little shade… as I mistype “table” on my iPad, I am thankful for the autocorrect sanitizing my somewhat boozy prose, while mostly appreciating the elegantly subtle blue underlining of the word “batle”, prompting me to consider: “is that really what you meant to write, or did you mean table?”

I like that. I like that more than the blue pencil with the little star that insistently offers an AI-assisted rewrite. Oh, sure, I am not a native English writer, so my grammar is somewhat tainted by the three other languages I might think in at any point in time. If you compound St. Patrick’s Day and this weekend’s Six Nations rugby results for France, you will understand if my writing lacks the usual corporate polish.

Having said that, I was at GTC for the first time; I listened to Jensen’s performance and was left enlightened and a bit worried. By now, the headline and the sound bite out there must be the $1 trillion line of sight on chip revenues for Nvidia over the next couple of years. Obviously, it is an extraordinary number. Unfathomable. Impossible to imagine for most of us. Almost impossible to think that we, collectively, would spend $125 each (at 8 billion people) on Nvidia products over the next couple of years. Surely that’s impossible.
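As a quick sanity check on that per-capita figure, here is the back-of-the-envelope arithmetic (both numbers are taken from the paragraph above; nothing else is assumed):

```python
# Back-of-the-envelope: $1 trillion in revenue spread over ~8 billion people.
revenue_usd = 1_000_000_000_000  # Jensen's stated line of sight
population = 8_000_000_000       # rough world population

per_capita = revenue_usd / population
print(per_capita)  # → 125.0 — about $125 of Nvidia spend per person
```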

Unless this is not about need, but about demand.  Unless that demand is accelerated, compounded, exponentially nurtured beyond its natural curve. 

Essentially, what I retained from the presentation was that the larger the model, the more the interactions, the larger the demand, and the faster and more abundantly the tokens have to be created to satisfy it. (I am sure AI could rewrite this sentence more elegantly, but screw it.) The measurement unit becomes tokens per Watt, as power is the limiting factor for a given data center, and tokens per second, as throughput is the limiting factor for a given service. Jensen even alluded to the fact that they will factor tokens-per-month grants into engineering packages as tokens become a productivity factor.
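Those two metrics are easy to sketch with numbers. To be clear, the power draw, throughput, and service share below are invented purely for illustration; they are not Nvidia figures:

```python
# Hypothetical figures for illustration only; not vendor numbers.
datacenter_power_w = 100_000_000         # a 100 MW facility
total_tokens_per_s = 5_000_000_000       # aggregate token throughput

# Tokens per second per Watt: the efficiency limit at the data-center level.
tokens_per_watt = total_tokens_per_s / datacenter_power_w

# Tokens per second for one service drawing an assumed 1% of the pool:
# the throughput limit at the service level.
service_share = 0.01
service_tokens_per_s = total_tokens_per_s * service_share

print(tokens_per_watt)       # → 50.0 tokens/s per Watt
print(service_tokens_per_s)  # → 50000000.0 tokens/s for the service
```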

The thesis for the $1T revenue relies on demand exploding and the emergence of a low-latency, high-I/O token market. Low latency and high I/O are understandable: multimodal and video models, requiring real-time inferencing from vehicles, robots, and physical AI generally, will drive them. The demand explosion, though, even factoring in the integration of compute and AI into IoT, devices, edges… if we look at adoption curves and industrial capacity, is decades away, not two years. Unless…

Unless we are not the demand. Us, consumers, enterprises, industries, governments… Agentic AI and Clawdbot are just showing how, beyond automation, agency becomes a compounding factor. Agents that you create for a specific purpose are understandable, useful, controllable.

Agents that interpret your intent, create other agents to enact their interpretation, and have access to your digital life, credit card, HR, accounts receivable, invoices, orders, security cameras, GPS movements had better be accountable, auditable, controllable. Agents that create fleets of agents to parcel out their workload are where I have doubts. The explosion in demand relies on the hypothesis that we will let agents create agents that consume tokens to satisfy our needs.

No doubt we will have agents to control, audit, and police agents, but it feels wrong to delegate tasks just because you can, or for the sake of efficiency.

This is where the philosophical debate clashes with the economic model. I learned that hard times create hard men. Hard men create easy times. Easy times create easy men. Easy men create hard times. We might have evolved past this adage, but being a kinetic, rather than a literal, learner, I feel I’ve learned from trying. I’ve learned from friction. To this day, I write in my notebook with a pen. I don’t forget anything I write. I forget most of what I type. It feels to me that friction is an integral part of the learning experience. More, it is an integral part of the human experience. The taste for effort, trying the hard things, failing is not only what most of mankind experiences on a daily basis; it is also, at least for me, a great condition for happiness. I am infinitely happier labouring and succeeding than enjoying an automated, frictionless, efficient experience. Even with a better result.

As my children are about to enter the workforce, I am confronted daily with the question “what is a safe, fulfilling career?”. It used to be that medicine, law, and engineering guaranteed a safe economic path. Nowadays, it looks like most entry-level intellectual effort can easily, efficiently be replaced, and agentic AI will only accelerate that trend. How are they supposed to master a domain they won’t be able to tinker and stumble in? Maybe I am just an old fart, and just as calculators and computers did not replace engineers, a higher level of abstraction will necessitate higher levels of intellectual effort? But this feels different.

Particularly if compute keeps accelerating and artificial intelligence surpasses human intelligence, then what? What is the imperative to learn, labour, try, suffer, if it is not necessary? Where do you draw the line between agents that help and augment, and agents that enable and replace?

Until then, I’ll keep labouring and burdening you with poorly written posts, but somewhat original or at least unique, because they’re mine. I enjoy this table I waited 30 minutes for, because I chose it and waited for it. I am not sure it would have tasted better had my personal AI butler booked it for me on my way there.