Jensen Huang’s address at GTC gave me a lot to think about. So much so that I decided to drive to Santa Cruz for a taste of the ocean. I had to wait 30 minutes to get the table I wanted, just by the beach, in the sun but with a little shade… as I mistype table on my iPad, I am thankful for autocorrect sanitizing my somewhat boozy prose, while mostly appreciating the elegantly subtle blue underlining of the word batle, prompting me to consider “is that really what you meant to write, or did you mean table”?
I like that. I like that more than the blue pencil with the little star that insistently offers an AI-assisted rewrite. Oh, sure, I am not a native English writer, so my grammar is somewhat tainted by the other 3 languages I might think in at any point in time. If I compound St Patrick’s Day and this weekend’s Six Nations rugby results for France, you will understand if my writing is not the usual corporate polish.
Having said that, I was at GTC for the first time; I listened to Jensen’s performance and was left enlightened and a bit worried. By now, the headline and the sound bite out there must be the $1 trillion line of sight on chip revenues for Nvidia over the next couple of years. Obviously, it is an extraordinary number. Unfathomable. Impossible to imagine for most of us. Almost impossible to think that we, collectively, would spend $125 each (at 8 billion people) on Nvidia stuff over the next couple of years. Surely that’s impossible.
Unless this is not about need, but about demand. Unless that demand is accelerated, compounded, exponentially nurtured beyond its natural curve.
Essentially, what I retained from the presentation was that the larger the model, the more the interactions, the larger the demand, and the faster and more plentifully the tokens have to be created to satisfy it. (I am sure AI could rewrite this sentence more elegantly, but screw it.) The measurement unit becomes tokens per watt, as it is the limiting factor for a given data center, and tokens per second, as it is the limiting factor for a given service. Jensen even alluded to the fact that they will factor tokens-per-month grants into engineering packages as tokens become a productivity factor.
The thesis for the $1T revenue relies on demand exploding and the emergence of a low-latency, high-I/O token market. Low latency and high I/O are understandable: multimodal video models, requiring real-time inferencing from vehicles, robots, and physical AI generally, will drive them. The demand explosion, though, even factoring in the integration of compute and AI into IoT, devices, edges… if we look at adoption curves and industrial capacity, it is decades away, not 2 years. Unless…
Unless we are not the demand. Us, consumers, enterprises, industries, governments… Agentic AI and Clawdbot are just showing how, beyond automation, agency becomes a compounding factor. Agents that you create for a specific purpose are understandable, useful, controllable.
Agents that interpret your intent, create other agents to enact their interpretation, and have access to your digital life, credit card, HR, accounts receivable, invoices, orders, security cameras, GPS movements had better be accountable, auditable, controllable. Agents that create fleets of agents to parcel out their workload are where I have doubts. The explosion in demand relies on the hypothesis that we will let agents create agents that consume tokens to satisfy our needs.
No doubt we will have agents to control, audit, and police agents, but it feels wrong to delegate tasks just because you can, or for the sake of efficiency.
This is where the philosophical debate clashes with the economic model. I learned that hard times create hard men. Hard men create easy times. Easy times create easy men. Easy men create hard times. We might have evolved past this adage, but I feel that, being a kinetic rather than a literal learner, I’ve learned from trying. I’ve learned from friction. To this day, I write in my notebook with a pen. I don’t forget anything I write. I forget most of what I type. It feels to me that friction is an integral part of the learning experience. More, it is an integral part of the human experience. The taste for effort, trying the hard things, failing, is not only what most of mankind experiences on a daily basis; it is also, at least for me, a great condition for happiness. I am infinitely happier labouring and succeeding than with an automated, frictionless, efficient experience. Even with a better result.
As my children are about to enter the workforce, I am confronted daily with the question “what is a safe, fulfilling career?”. It used to be that medicine, law, and engineering guaranteed a safe economic path. Nowadays, it looks like most entry-level intellectual effort can easily, efficiently be replaced, and agentic AI will only accelerate that trend. How are they supposed to master a domain they won’t be able to tinker with and stumble through? Maybe I am just an old fart, and just as calculators and computers did not replace engineers, a higher level of abstraction will necessitate higher levels of intellectual effort? But this feels different.
Particularly if compute keeps accelerating and artificial intelligence surpasses human intelligence, then what? What is the imperative to learn, labour, try, suffer, if it is not necessary? Where do you draw the line between agents that help and augment and agents that enable and replace?
Until then, I’ll keep labouring and burdening you with poorly written posts, somewhat original, or at least unique, because they’re mine. I enjoy this table I waited 30 minutes for, because I chose it and waited for it. I am not sure it would have tasted better had my personal AI butler booked it for me on my way there.
