At my first job covering technology, in 1982, I met a librarian who had escaped the stacks.
He had set himself up as a “digital librarian,” using the online resources of the time to answer questions from Atlanta corporations and law firms. This was in the days of Lexis, Nexis, and Usenet. He didn’t deliver direct answers. Instead, he showed where the resources to answer a question were, and which questions needed to be asked to get useful answers. He printed reports and sent them out by courier. He called himself an “information broker.”
I thought about him a lot when the Web was spun and database computing arrived in the early 2000s. He knew a great deal about how data is organized, knowledge I was convinced would be invaluable to the industry.
I thought about him again today when the newsletter Understanding AI offered a piece claiming that, if my old friend were still working (he’d be well into his 70s now), his skills would be obsolete. This is thanks to a new OpenAI capability, quickly copied by others, called Deep Research.
But would he be obsolete? Or would he be more valuable than ever?
Deep or Accurate?
The story is that Deep Research takes time to answer complex queries. In one case, it spent 28 minutes, consulting 21 sources, replicating what was behind a paywall, and producing an architectural checklist that would have taken a professional researcher 10 hours to create.
It works the same way the librarian did. It uses what it finds in one document to find another, then another, going “down the rabbit hole” until it develops expertise. The technique behind it is called reinforcement learning.
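The “rabbit hole” behavior can be sketched as a toy loop: each document surfaces sources, and the agent keeps following them until its budget runs out. This is purely illustrative, not OpenAI’s implementation; the corpus and its link structure are invented, and a real system uses a model, not a fixed graph, to decide which leads to pursue.

```python
# Toy sketch of iterative research: follow the sources found in each
# document to find the next one. The corpus below is invented.
CORPUS = {
    "query_result": ["survey_2023"],
    "survey_2023": ["benchmark_paper", "blog_post"],
    "benchmark_paper": ["dataset_docs"],
    "blog_post": [],
    "dataset_docs": [],
}

def research(start, budget=10):
    """Follow references breadth-first until the budget is spent."""
    consulted, frontier = [], [start]
    while frontier and len(consulted) < budget:
        doc = frontier.pop(0)
        if doc in consulted:
            continue  # already read this one
        consulted.append(doc)
        frontier.extend(CORPUS.get(doc, []))  # new leads to chase
    return consulted

print(research("query_result"))
# → ['query_result', 'survey_2023', 'benchmark_paper', 'blog_post', 'dataset_docs']
```

The budget cap stands in for the time limit a real agent works under; Deep Research spent 28 minutes and 21 sources on the query described above.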
Reinforcement learning has delivered dramatic improvements to AI over the last two years, writes editor Timothy Lee. Letting the machine “think for itself,” Lee argues, creates better training data, and better results over time. He’s right.
But how accurate are these results? LLMs are still making stuff up. Gary Marcus writes that many responses have “very subtle, and difficult to detect errors.” He concludes, “the more excited people are about LLMs, the more I wonder how carefully they have examined the output.”
There’s a lot of money to be made in auditing what LLMs do: double-checking their work and, in effect, grading it. When it comes to research questions, this is work that can only be done through the magic of library science, and we must not lose that discipline.
This is true in all areas of life.
The “geniuses” behind AI think they’re going to eliminate jobs and intellectual functions. But they’re only making the disciplines I saw over 40 years ago more important, and more vital, than ever before.
Some in the business are catching on. Here’s a pitch from a company called Bounti, offering sales automation. Slap this across the industry’s face until it understands.
“Let AI handle the busywork so you can show up as the best version of yourself.”