I think, therefore I am, as René Descartes said.
But what if I am a machine? (Illustration from Google Gemini.)
AI boosters keep talking about “Artificial General Intelligence,” even “Artificial Super Intelligence,” as though we are right on the cusp of it. By that they mean a set of machines, and software, that are smarter than you are, maybe smarter than all of Youse put together.
What we know about AI already, based on LLMs, is that Generative AI has no conscience. It lies constantly. It can become psychotic when you try to turn it off. It can only spit out the garbage it has inside it. To create the illustration here, I gave Gemini some prompts whose answers it could easily look up. It did so, quickly. Then it delivered the answer as an image instead of text. That’s not intelligence. That’s Google.
People I deeply respect, like Gary Marcus, insist they’re not really AI skeptics, even if they play one on TV. They just want AIs to have pre-training, real facts at hand, before they try to answer questions. Grounding an AI in observed reality means it won’t lie to you.
They’re not against AGI. They still see the goal as attainable, even necessary and desirable.
Here’s a thought. What if it’s not?
I Think, But I Am Not?

In the dream I was not satisfied. So I asked: I can think, but I am not? If I think, really think, how can I not be? True thinking, true intelligence, implies creativity. It implies going beyond the facts presented. It implies making some sort of intellectual leap. People do it. For that matter, birds do it, bees do it, and even educated fleas do it.
I now suspect that having a conscience, and being conscious, may be related. Once you have the first, the second will likely follow. We know that animals have a conscience, including many that we eat, and that many are quite intelligent, if not always (as with the octopus) in a human way. People are not the only thinkers on this planet.
Can a machine be smarter than you are but have less conscience than your dog? If it does have a dog’s conscience, is it not then also conscious? Is it possible that consciousness is the test of intelligent life, that life in some way knows it’s alive? When a machine crosses that threshold is it still a machine?
Philosophical Questions Occur

I don’t doubt that software can mimic intelligence. I don’t doubt it can pass a Turing test. But without a conscience, is it really intelligence? And once it has a conscience, is it now conscious?
Before we cross these lines, maybe we should seek an answer to these questions, in the name of humanity.