Everyone and Dolly Parton is afraid of Big AI programs that can replace people.
But even Little AI programs can be scary. They need to know a lot about you to work. If I am to give you personalized service, after all, don’t I need to know something about whom I’m dealing with? What do you like, what don’t you like, what do you really want, what have you bought before? That’s what a good clerk, or a good bartender, will do for you.
In a recent InformationWeek slideshow on AI, Pam Baker wrote about these “autonomous agents” that supercharge customer service.
They’re working to solve what marketing expert Jim Sterne called the “Suzy” problem a quarter century ago.
Suzy
Suzy was an intern. She was given incoming e-mails from customers, voicing complaints no one had time to deal with. Suzy dealt with them, getting specific answers from the right people, and signed her e-mails “Suzy.” When she went back to college, customers complained that customer service had gone away. “Where’s Suzy?” they asked. In response the company created a whole department to deal with complaints, and everyone signed their e-mails “Suzy.”
Now Suzy is a bot. Suzy Bot can use the company’s whole database to find answers to incoming questions. She can act, then tell people about what she did, in a conversational style. To do her job she must use a company’s entire data store, starting with everything it knows about the customer asking the question.
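A minimal sketch of what a Suzy-style bot does at its core: pull up the customer’s record first, then match the question against the company’s knowledge base and answer in a conversational voice. Everything here is hypothetical and illustrative (the `CUSTOMERS` and `KNOWLEDGE_BASE` tables and the `answer` function are made-up names, not any real product’s API); a production bot would use a language model and a far larger data store.

```python
# Hypothetical "Suzy" support bot: customer record + knowledge base -> reply.
# All data and names here are illustrative, not a real system.

CUSTOMERS = {
    "ann@example.com": {"name": "Ann", "last_order": "standing desk"},
}

KNOWLEDGE_BASE = {
    "return": "Returns are accepted within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def answer(email: str, question: str) -> str:
    """Look up the customer, match the question to a policy, sign as Suzy."""
    customer = CUSTOMERS.get(email, {"name": "there"})
    for topic, policy in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return f"Hi {customer['name']}, {policy} -- Suzy"
    # No match: escalate, the way the original intern did.
    return (f"Hi {customer['name']}, let me check with the right person "
            f"and get back to you. -- Suzy")

print(answer("ann@example.com", "How do I return my desk?"))
```

The point of the sketch is the order of operations: the customer lookup comes first, because the personalization is the product. The same loop, pointed at an HR database instead of a customer one, is the “inward Suzy” discussed below.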
Some people will freak about this. They shouldn’t. Suzy is just automating something that needs automating, something companies couldn’t afford to automate before.
What happens when you turn Suzy inward? What do you think about a human resources Suzy, who might act on employee complaints or advise management on whom to dump in a lay-off? It’s the same software, and all the “learning data” is the property of management. But how much autonomy do you give this Suzy?
Who Polices the AI?
The New Yorker has been writing about AI all year. One of the best pieces is James Somers’s recent effort, which starts with the worry that coders will be replaced. He reluctantly draws a benign conclusion: AI just makes coders more productive by handling mundane tasks.
It reminded me of my wife, who started as an assembly programmer, graduated to COBOL, then to Java, and now oversees changes to her company’s entire application system. Her life with the code gave her value, but if her present job were replaced by AI, she’d just graduate to something even more useful.
This is true for all of us. In the end, computers and software serve us. That’s their function. And to answer Somers’s final question, about what he should teach his own kid: teach them to think. Degrees in the humanities and liberal arts are going to become awfully valuable in the future, once all the rudiments of technical work are automated. Someone must govern the AIs.