Monday, March 16, 2026

The philosophical problem with agentic AI


Jensen Huang’s address at GTC gave me a lot to think about. So much so that I decided to drive to Santa Cruz for a taste of the ocean. I had to wait 30 minutes to get the table I wanted, just by the beach, in the sun but with a little shade… As I mistype "table" on my iPad, I am thankful for the autocorrect sanitizing my somewhat boozy prose, while mostly appreciating the elegantly subtle blue underlining of the word "batle", prompting me to consider: "is that really what you meant to write, or did you mean table?"

I like that. I like that more than the blue pencil with the little star that insistently offers an AI-assisted rewrite. Oh, sure, I am not a native English writer, so my grammar is somewhat tainted by the other 3 languages I might think in at any point in time. If I compound St Patrick's Day and this weekend's Six Nations rugby results for France, you will understand if my writing lacks the usual corporate polish.

Having said that, this was my first GTC; I listened to Jensen's performance and was left enlightened and a bit worried. By now, the headline and the sound bite out there must be the $1 trillion line of sight on chip revenues for Nvidia over the next couple of years. Obviously, it is an extraordinary number. Unfathomable. Impossible to imagine for most of us. Almost impossible to think that we, collectively, would spend $125 each (at 8 billion people) on Nvidia stuff over the next couple of years. Surely that's impossible.

Unless this is not about need, but about demand. Unless that demand is accelerated, compounded, exponentially nurtured beyond its natural curve.

Essentially, what I retained from the presentation was that the larger the model, the more the interactions, the larger the demand, the faster and more tokens have to be created to satisfy it. (I am sure AI could rewrite this sentence more elegantly, but screw it). The measurement unit becomes tokens per watt, as that is the limiting factor for a given data center, and tokens per second, as that is the limiting factor for a given service. Jensen even alluded to the fact that token-per-month grants will be factored into engineering compensation packages, as tokens become a productivity factor.
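
To make those two ceilings concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (the facility power budget, the fleet efficiency, the per-session generation rate) is my own illustrative assumption, not a figure from the keynote:

```python
# Back-of-the-envelope sketch of the two token ceilings discussed above.
# All numbers are illustrative assumptions, not figures from the GTC keynote.

DATACENTER_POWER_W = 100e6   # assumed 100 MW facility power budget
TOKENS_PER_WATT = 0.5        # assumed fleet efficiency: tokens/s generated per watt drawn

# Ceiling 1: tokens per watt caps a data center's total throughput.
max_tokens_per_second = DATACENTER_POWER_W * TOKENS_PER_WATT
print(f"Facility ceiling: {max_tokens_per_second:,.0f} tokens/s")

# Ceiling 2: tokens per second caps a given service.
# A conversational service must stream tokens at least as fast as people consume them.
TOKENS_PER_SECOND_PER_SESSION = 30   # assumed generation rate per active session
max_sessions = max_tokens_per_second / TOKENS_PER_SECOND_PER_SESSION
print(f"Service ceiling: ~{max_sessions:,.0f} concurrent sessions")
```

Whichever binds first, the watt budget of the facility or the per-session rate of the service, is the ceiling being priced.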

The thesis for the $1T revenue relies on demand exploding and on the emergence of a low-latency, high-I/O token market. Low latency, high I/O is understandable: multimodal and video models requiring real-time inferencing from vehicles, robots and physical AI in general will drive it. The demand explosion, though, even factoring in the integration of compute and AI into IoT, devices, edges… if we look at adoption curves and industrial capacity, is decades away, not 2 years. Unless…

Unless we are not the demand. Us: consumers, enterprises, industries, governments… Agentic AI and Clawdbot are just showing how, beyond automation, agency becomes a compounding factor. Agents that you create for a specific purpose are understandable, useful, controllable.

Agents that interpret your intent, create other agents to enact their interpretation, and have access to your digital life, credit card, HR, accounts receivable, invoices, orders, security cameras, GPS movements had better be accountable, auditable, controllable. Agents that create fleets of agents to parcel out their workload are where I have doubts. The explosion in demand relies on the hypothesis that we will let agents create agents that consume tokens to satisfy our needs.
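
To see why that hypothesis compounds so violently, here is a toy sketch. The fan-out, delegation depth and per-agent token cost are hypothetical numbers chosen only to show the geometry:

```python
# Toy model of compounding token demand when agents spawn agents.
# Fan-out, depth and per-agent token cost are hypothetical assumptions.

FAN_OUT = 4                 # assumed sub-agents each agent delegates to
DEPTH = 4                   # assumed levels of delegation below the root agent
TOKENS_PER_AGENT = 50_000   # assumed tokens a single agent burns on its own task

# Geometric series: 1 root + 4 + 16 + 64 + 256 sub-agents = 341 agents.
total_agents = sum(FAN_OUT**level for level in range(DEPTH + 1))
total_tokens = total_agents * TOKENS_PER_AGENT

print(f"One user intent -> {total_agents} agents")
print(f"Token demand: {total_tokens:,} tokens")
print(f"Amplification vs. a single agent: {total_agents}x")
```

Three hundred and forty-one agents and seventeen million tokens for one intent, under toy assumptions. That is the compounding the $1T thesis leans on.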

No doubt we will have agents to control, audit and police agents, but it feels wrong to delegate tasks just because you can, or for the sake of efficiency.

This is where the philosophical debate clashes with the economic model. I learned that hard times create hard men. Hard men create easy times. Easy times create easy men. Easy men create hard times. We might have evolved past this adage, but I feel that, being a kinetic rather than a literal learner, I’ve learned from trying. I’ve learned from friction. To this day, I write in my notebook with a pen. I don’t forget anything I write. I forget most of what I type. It feels to me that friction is an integral part of the learning experience. More, it is an integral part of the human experience. The taste for effort, trying the hard things, failing, is not only what most of mankind experiences on a daily basis, it is also, at least for me, a great condition for happiness. I am infinitely happier labouring and succeeding than enjoying an automated, frictionless, efficient experience. Even with a better result.

As my children are about to enter the workforce, I am confronted daily with the question "what is a safe, fulfilling career?". It used to be that medicine, law, engineering guaranteed a safe economic path. Nowadays, it looks like most entry-level intellectual effort can easily, efficiently be replaced, and that agentic AI will only accelerate that trend. How are they supposed to master a domain they won't be able to tinker with and stumble through? Maybe I am just an old fart, and just as calculators and computers did not replace engineers, a higher level of abstraction will necessitate higher levels of intellectual effort? But this feels different.

Particularly if compute keeps accelerating and artificial intelligence surpasses human intelligence, then what? What is the imperative to learn, labour, try, suffer, if it is not necessary? Where do you draw the line between agents that help and augment, and agents that enable and replace?

Until then, I’ll keep labouring and burdening you with poorly written posts that are somewhat original, or at least unique, because they’re mine. I enjoy this table I waited 30 minutes for, because I chose it and waited for it. I am not sure the meal would have tasted better had my personal AI butler booked it for me on my way there.


Wednesday, March 11, 2026

AI is a new G

I returned from MWC 2026 with an uneasy feeling.

The telecommunications industry has long been defined by its generational leaps—each "G" marking a profound shift in capabilities, use cases, and societal impact.

2G brought reliable digital voice and SMS, enabling mass mobile communication. 3G introduced mobile data and picture messaging, laying the foundation for internet on the go. 4G powered the explosion of social media, apps, and always-on connectivity. 5G delivered massive bandwidth, fueling high-definition video streaming.

These evolutions followed a predictable cadence governed by 3GPP standards, with operators methodically upgrading infrastructure, spectrum, and devices in multi-year cycles. Parallel to this, the network itself transformed through virtualization: from SDN separating control and data planes, to disaggregating hardware from software, and evolving VNFs (Virtual Network Functions) into cloud-native CNFs (Cloud-native Network Functions). These shifts improved flexibility, scalability, and cost efficiency but remained incremental within the familiar "G" framework.

AI is entering telecom in silos: AI-RAN for spectrum and energy optimization, agentic AI in OSS for autonomous operations and predictive assurance, customer service copilots for intent-based support. Each delivers proven cost savings (e.g., 25-40% OPEX reductions in network operations, energy-efficiency gains of up to 35%). Yet these domain-specific wins rarely connect into a unified, end-to-end intelligence layer. Data stays fragmented across RAN, core, edge, and OSS/BSS, leading to duplicated effort, incomplete visibility, and "agent sprawl" risks. Industry sources highlight how these silos impede multi-agent ecosystems and truly autonomous networks.

The industry's prevailing instinct is to treat AI as just another upgrade cycle: one more "G" to deploy within the familiar framework. This misconception manifests in several ways:

Viewing AI as incremental tech add-ons — Operators often pursue isolated pilots (e.g., AI-RAN trials, genAI copilots, or agentic OSS agents) expecting quick wins without addressing deeper structural issues.

Underplaying organizational and cultural complexity — AI demands far more than engineering upgrades. It requires breaking down legacy silos (RAN/IT/OSS/BSS), fostering cross-functional agility, upskilling thousands in MLOps and data governance, and driving cultural shifts to trust agentic systems. Cultural resistance, job-security fears, and fragmented skills often stall progress, with many projects failing to move beyond pilots (only ~30% of genAI use cases reach production in some analyses). Organizational challenges, including change management and silo-breaking, rank among the top barriers, yet leadership frequently delegates AI to a separate function rather than owning it as a CEO imperative.

Misjudging the scale of change needed — Unlike past "G" evolutions (hardware/spectrum-driven, standardized via 3GPP), AI is a software-defined, data-hungry, adaptive intelligence layer that reshapes workflows, decision-making, operating models, and even business identity (from connectivity provider to intelligent platform). Treating it as "just tech" ignores the need for unified data fabrics, intent-based orchestration, governed multi-agent ecosystems, and radical process redesign—efforts that can take years, not quarters, and demand massive internal rewiring.

New vendors (hyperscalers, specialized AI-RAN players, agentic platforms) disrupt legacy supplier models, while operating models evolve toward intent-driven, cloud-native, agent-orchestrated environments requiring cross-functional agility and new skills. Massive CAPEX uncertainty surrounds compute (GPUs, accelerators), high-bandwidth memory, power, and cooling—often in the hundreds of billions globally—amid unclear ROI timelines and risks like underutilization. AI excels at cost management through optimization, but revenue-generating services (e.g., enterprise AI platforms, GPUaaS, network APIs for AI workloads, personalized offerings) remain nascent for most operators. This imbalance—cost wins without broad revenue upside, vendor shifts, and compute investment risks—demands an AI strategy that starts with organization and operational models, not technology.

This underestimation risks turning AI from a greenfield opportunity into added complexity: persistent silos, agent sprawl, duplicated investments, and missed revenue potential. Proven cost optimizations are real, but without holistic transformation, operators may achieve efficiency gains while remaining commoditized pipes in an AI-driven world. Warning to operators: AI is not "plug-and-play." Underestimating its demands—starting with organization, leadership alignment, operating model redesign, and cultural renewal before heavy technology scaling—will lead to stalled initiatives, wasted CAPEX (especially on compute/infra), and competitive disadvantage. Frontrunners recognize AI as a radical reinvention requiring bold, enterprise-wide commitment; the rest risk being left behind as the intelligence generation unfolds.