For years now, Gary Marcus has been calling upon Artificial Intelligence researchers to turn away from Large Language Models and explore the human mind.
The industry is starting to do this.
But the move is threatened by geopolitics. We don’t like the people using the new approach.
One of the leading scientists studying a different way of doing AI is Song-Chun Zhu, formerly of UCLA. He advocates for what he calls a “small data, big task” approach.
We use this approach all the time. We defend it as “common sense.” It doesn’t always work. The approach can send us down rabbit holes. But when your bicycle is stuck in traffic, or when you don’t know the right way to get the answer on a test, it’s what we do.
I asked Google’s Gemini about small data, however, and it came up with a lot of gobbledygook. Gemini responded based entirely on what Large Language Models (LLMs) do, treating small data as a problem rather than an opportunity. Its ultimate answer was to “use small data to empower human experts, rather than fully replacing them.”
But that result isn’t AI. It’s what computers do already. The mainstream is still dismissing a new approach.
Is Small Data Communist?

The resources Zhu now has in Beijing are over the top. But his ideas are infusing academic curricula and Chinese government policy, which hopes small data can drive big leaps in areas from factory productivity to elder care.
It makes sense. Take the data as far as it goes, implement the best possible solutions using the data, then study how people take the final step, how we handle the risks of being wrong, and how we change our minds.
Because Zhu is Chinese, and because his new institute is Chinese, there’s a reluctance on the part of American scientists to follow the same path, even to acknowledge the importance of his work. Instead of making the turn Marcus has suggested, we’re seeing start-ups just jump among the large models.
They’re building committees.
Humility Needed

We associate breakthroughs with individuals when they're really the product of teams. We think of "winning" as something only one side can do, when scientists know that it's in cooperation that the magic happens. Make the competition as open as possible; cooperate across disciplines, cultures, and approaches. You get there by working together.
But that seems anathema in our politics today, and increasingly it seems impossible in academia. The assumption that there can be only one winner is entirely false. But it's the one we're all operating under.
AI needs humility. Even while cash is raining down on it.