Generative AI lies. It’s as simple as that.
Systems like Grok and ChatGPT tell their users what they want to hear, often with no connection to reality. The hope of Neuro-Symbolic AI (NSAI) is to restore that connection. (Image from Google's Gemini.)
Given that goal, and the endorsement of experts like Gary Marcus, it's surprising how much of a backwater NSAI remains. There are startups emerging from stealth, but most are seeking niches where there might be paying customers rather than drinking deep at the venture capital trough.
There are reasons for this. One is the assumption that the big boys will do the work, and that if they haven't done it, it can't be done. Another is that a general-purpose solution would still require an enormous symbolic database, so NSAI has to piggy-back on Generative AI anyway.
The whole point of NSAI is to organize reliable data and test GenAI's conclusions against it, acting as the professor to GenAI's precocious student. That's why you're seeing NSAI applied in verticals like insurance and healthcare, where the costs of being wrong are very high.
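To make that professor-and-student relationship concrete, here is a minimal sketch of the checking loop. The fact store, the claim format, and the policy names are all invented for illustration; a real system would use a curated knowledge base and a proper claim extractor, not hand-written tuples.

```python
# Hypothetical sketch: a symbolic fact store vets claims extracted from a
# generated answer before the answer is accepted.

KNOWLEDGE_BASE = {
    ("policy_X", "covers", "fire_damage"): True,
    ("policy_X", "covers", "flood_damage"): False,
}

def verify_claims(claims):
    """Reject claims the knowledge base contradicts; flag claims it can't confirm."""
    failures = []
    for claim in claims:
        known = KNOWLEDGE_BASE.get(claim)
        if known is False:
            failures.append((claim, "contradicts knowledge base"))
        elif known is None:
            failures.append((claim, "unsupported -- needs human review"))
    return len(failures) == 0, failures

# Claims pulled from a hypothetical generated answer
generated_claims = [
    ("policy_X", "covers", "fire_damage"),
    ("policy_X", "covers", "flood_damage"),  # the model made this one up
]

ok, problems = verify_claims(generated_claims)
print("accept answer" if ok else f"reject answer: {problems}")
```

The point of the pattern is that nothing the generative model says reaches the user until the symbolic layer has had its say, which is exactly why it appeals to high-stakes verticals.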
The Bottom Line
But the law of Garbage In, Garbage Out (GIGO) still applies. Quality data remains more essential than cunning algorithms, and how can you be sure that GenAI will listen to its teacher, or that the teacher won't make mistakes of its own?
The real new frontier is the business of cleaning databases, verifying that the pointers and references built on top of them are accurate, and guaranteeing that the connections between systems actually work.
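As a hypothetical taste of that unglamorous work, the sketch below checks that every pointer in one set of records actually resolves to something in the set it references. The table and column names are invented for the example.

```python
# Illustrative referential-integrity check: flag "claims" whose policy_id
# points at a policy that doesn't exist.

claims = [
    {"claim_id": 1, "policy_id": "P-100"},
    {"claim_id": 2, "policy_id": "P-999"},  # dangling reference
]
policies = {"P-100", "P-200"}

dangling = [row for row in claims if row["policy_id"] not in policies]
for row in dangling:
    print(f"claim {row['claim_id']} points at missing policy {row['policy_id']}")
```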
It’s just not sexy. The cost of being wrong will keep rising until there’s a greater premium on being right.