Billions of dollars are being poured into Nvidia-powered data centers and software today, all aimed at something called Artificial General Intelligence (AGI).
AGI is the “Holy Grail” of the companies behind ChatGPT, Grok, Gemini, and Claude.
The claim is that AGI will replace people, reasoning through any problem better than you or I can.
The goal is nonsense, and yet, in another sense, it has long since been achieved.
My first problem is with the method. Training a model on vast datasets to recognize patterns and answer free-flowing questions is how Large Language Models pursue AGI. But we’ve known for over a year that the approach is limited. The models’ output improves, but only slowly. It’s like a bullet fired into gel: its momentum dissipates until it stops, and you can pull the bullet out and analyze it.
Human reasoning isn’t like GenAI’s. It doesn’t depend entirely on its inputs, as an LLM must. We routinely make creative leaps that don’t depend on input at all, or that handle input in unexpected ways. I do it with words; you may do it with numbers or images. Babies do all of this intuitively, on relatively little energy and scant input.
My second problem is that the goal of AGI, making rational sense of enormous inputs, has long been achieved. Google may be limited to specific types of inputs and outputs, but it knows more than you ever will, and its results come back in a moment or two. When the inputs are known and the outputs defined, we have long been able to put machines to work running large systems that once took whole bureaucracies to manage.
The Problem Is Old
The problems we’ve had with machines since the Industrial Revolution recur with computers.
Engines can lift and manipulate objects with far more facility than even an enormous team of people can. But they still need human direction, operators who know the job and can step in when needed.
Computer programs can ingest enormous data sets and deliver far more complex outputs than any human being could. Palantir doesn’t do AI as such; it’s pure database programming. But with enough data it can run a hospital, a battlefield, or an election campaign.
That doesn’t mean software can or should replace people. It’s an assistant, just as machines and engines are assistants. It augments our capabilities. It doesn’t replace us. What would be the point?
In 2001: A Space Odyssey, the shipboard computer HAL can’t reason or function outside its spaceship home. When it runs into input it can’t handle, it tells Dave no. Dave presses forward anyway, “killing” HAL, and in the process discovers what lies on the other side.
Artificial General Intelligence has been mislabeled, misconstrued, and oversold. Why would we want a computer to replace us anyway? What would be our function if it did? We asked the same questions of the Industrial Revolution nearly a century ago, with Charlie Chaplin going inside the machine in his movie Modern Times.
AI needs a new vision. Let computers do what computers do: aid people in their work. Stop pretending AGI is going to “replace” people. The speculators who claim this is possible, or even desirable, only seek to replace your liberty with their license.