In a 20th century war, Ukraine would have been lost years ago. (Image from Google Gemini.)
Those wars were about logistics, about getting men and matériel to the front. Russia won those kinds of wars. They won World War II. They kept Eastern Europe occupied until their economy collapsed.
But here is Ukraine in 2026, not only standing tall after four years but teaching the Gulf Arabs and NATO how to fight, and win, against a larger and ruthless enemy.
There are lessons here that go far beyond the battlefield. I believe the most important lessons go to the heart of our economic insecurity.
The Ukraine war began in February 2022, nine months before OpenAI demonstrated its Large Language Model ChatGPT, kicking off the AI era. AI's claim is that it automates intelligence the way the steam engine automated physical work. The optimists harp on this because the Industrial Revolution created enormous wealth and, eventually, enormous prosperity. (It also created enormous wars and enormous suffering, blowing off wealth the way an oil well blows off methane.)
But intellectual labor is not like physical labor. Ukraine demonstrates this every day. The country has become the modern arsenal of democracy, constantly adapting, improving, and changing in the face of the Russian onslaught. Ukraine’s genius isn’t mass production, but mass adaptation. Its success doesn’t come from its cash reserves, but from the unity of its people, their resilience in the face of horrors that would put most of us into catatonia.
How does this relate to AI?
LLMs Are Russian

LLMs are all about answers, but they don’t know how to ask questions. They certainly can’t figure out what questions to ask. Only people can do that.
LLMs are like tanks. They’re like ships and planes. They’re all vulnerable to drones.
Russia has drones, but Ukraine has figured out the game. It did this through constant adaptation and improvisation. This is another skill no LLM has.
But everyone can see the limits of LLMs. AI agents aren't flawless, either. I know Yann LeCun is working on great things with his "world models," which are supposed to keep AI answers within some kind of bounds. This will help, but it won't change the fact that AI is just a tool, the way a steam shovel is a tool. It needs an operator.
The same things that were true 200 years ago in the Industrial Revolution are true today for the AI Revolution. We’re building the equivalent of iron rails and boilers that are heavy and prone to explosions. The difference is we’re deploying this technology, willy-nilly, with no thought to the dangers.
Be Ukraine

I think this is how AI will play out, and it's why Mark Zuckerberg is acting so scared right now, throwing money at AI "talent" and trying to reorganize his company into dominance. Central control doesn't work. You must give the people on the line the power to make changes. That's how you adapt to rapid change.
Bureaucracies are going to lose the AI battle. Most of the Cloud Czars won't get across the divide. They're becoming AT&T, giant phone networks. That's still a powerful position, but the big value-add comes from software, and it comes from using software creatively, solving problems by controlling it, not letting it control you.
It's true we're in the early innings of the AI Revolution, just as we're in the early innings of the AI war. But Ukraine has already shown us how this is going to go. Creativity can be massively expanded with AI tools, but AI is not truly creative and may never be.
You’re not John Henry, and Catbert does not control your fate. The tools of the future are there to be used, and those who use them best will win.