Despite AI's continuing failures, the nonsense just keeps getting louder.
The latest bit of nonsense is a bet between AI skeptic Gary Marcus and AI advocate Miles Brundage.
Brundage thinks AI will become creative by the end of 2027. He thinks AI will be able to write a coherent legal brief, a Pulitzer-quality book, an Oscar-worthy screenplay, and a large bug-free computer program, and to come up with paradigm-shifting scientific discoveries.
My argument against this is not that it's impossible, though I'd take Marcus's side of the bet. My argument is that it's stupid.
Computers are great just as they are, without the constraints of human intelligence. They shouldn’t be bound by human job descriptions.
Look at a great Christmas movie like The Apartment (1960) and you'll see what I mean. All those rows of trained accountants were replaced by computers within the decade. Where did they go? To better jobs. Or read William Gibson's Neuromancer, which today feels more like an alternate history than a futuristic fantasy.
Computer code can encompass whole worlds, from a hospital to a battlefield, from a factory to a city. The idea that technology should be limited to doing what people do is constraining; besides, people already do it.
Today's Masters of AI have seen too many old sci-fi movies where computers make human beings into drones or slaves. If Grok is going to do that to us, why wouldn't it do that to Elon Musk? Money that serves only one man, like technology that serves only one man, makes for an unstable autocracy that will deliver only revolution in its wake. If AI isn't helping us, we can replace it, and we should.
What AI Should Be
AI is not and should not be “creative.” But it can be the greatest aid yet devised for human creativity.
Computer code must be tested, eventually through the market. Scientific concepts must be proven by experiment. Legal cases must be argued and adjudicated. Tools that don’t serve human needs shouldn’t exist, but AI can be great for building new tools.
I have no problem with AI doing a literature search. I’m just not going to call the resulting output literature. Any AI program, in fact any program, is limited in what it can do by the strength of its inputs. The law of Garbage In, Garbage Out always applies. The ore of human knowledge contains enormous piles of garbage, and it’s clear that AI software can’t refine it all out. Nor would we want it to.
But AI can organize. AI can coordinate. AI can collect information and deliver it in a hundred different ways. This assumes, of course, that we make AI our servant and not our master. This assumes that the masters of AI serve humanity. When they don’t, we don’t toss the AI. We get new masters.