6 Comments
Il mecenate dell'IA

The energy constraint is the most under-discussed part of this whole story. Compute, models, and capital scale fast — grids and power plants don’t. Whoever controls energy and inference efficiency will quietly control the real pace of AI deployment.

Arpit Mittal

Well written. Thanks for sharing.

Ryan

Kenn, thank you! This is great.

You just popped up on my radar. I glanced at every headline and squint-tested the content of the deck.

Saved for a deeper dive.

G88

Hello, thanks for the writeup.

Please allow me to express some doubts.

First: if agents are so good at accomplishing a given task, and agents in the West are provided by a few companies that sometimes divulge some (unverifiable) revenue numbers, why are those revenue numbers so small?

Second: no one really knows whether the output of an LLM is actually good or not. You can ask the same question in two different windows and get very different responses. And there are many more issues like this, of course.

Kenn So

Hey G88, thanks for reading. On your first point: I disagree that these are small revenue numbers. 99% of companies never reach $1B in revenue. What would constitute not-small revenue numbers for you? At OpenAI and Anthropic's scale, their numbers are audited, especially as they prepare for an IPO. They're also among the fastest-growing companies in history.

On the second point, I agree. Outputs can vary with the tweak of a word, which is annoying for me too sometimes. But that doesn't mean the output isn't helpful. For some work that's not acceptable (mathematics, for example), but for lots of knowledge work it's totally fine, and even 80% directionally right is great.

Duncan

Love this write-up! You highlight the transition from AI-as-assistant to AI-as-worker and cite AI agents' ability to outperform humans in a variety of tasks. Combined with the chart showing faster adoption of AI than of other tech, many people are concerned that knowledge workers who are currently employed might soon be obsolete.

My question is: do you foresee knowledge workers being replaced by AI agents marketed as full employees in the next 3-5 years, or do you think the next phase of professional AI use will primarily be a higher concentration of responsibility per human employee, facilitated by their assumed use of AI tools and agents? And do you currently see any examples of AI agents able to handle a full typical job description, even in highly impacted industries?

I'm curious about how long we have before an agent can be assigned a job description and handle the data collection, analysis, communication, and proactivity needed. Customer support is the one that comes to mind as being fully automated first, but in a sense a decision-tree answering service with pre-recorded responses already did that (with lower quality) years ago.