May 28, 2019
By Ian Jacobs, vice president and research director, Forrester Research

Conversational AI Should Speak Plainly and Carry a Big Meaning


‘Don’t say it in Russian/ Don’t say it in German/ Say it in broken English’ —Marianne Faithfull (‘Broken English’)

If my columns of late all seem to be about chatbots and automation, that’s because these seem to be the only issues my clients want to talk about. I’ve already discussed why humans will still matter in a world of conversational artificial intelligence (see “How Chatbots Can Create a New Kind of Agent” in the May issue)—and may even become more critical to customer interactions. But now I think it’s time to discuss something that probably seems tautological: why, in a world of conversational AI, conversational AI needs to actually be conversational.

One of the ways that vendors try to get companies excited about conversational AI is by promising that their solutions will ingest, as a training set, whatever information the organization already has that could provide answers to customers: transcripts of customer service phone calls, chat logs, online FAQs, knowledge articles, and even product manuals. But these sources just aren’t the same. Here’s an example of why:

“For all flights covered by this Plan, United will ensure that passengers on the delayed flight receive notification beginning 30 minutes after departure time (including any revised departure time that passengers were notified about before boarding) and every 30 minutes thereafter that they have the opportunity to deplane from an aircraft that is at the gate or another disembarkation area with the door open if the opportunity to deplane actually exists.”

That very short excerpt is pulled from United Airlines’ much longer FAQ detailing its “Tarmac Delay Contingency Plan.” The way that excerpt—and the FAQ itself—is written bears little resemblance to how one human being would write or talk to another, unless one of those human beings were a lawyer writing in a lawyerly context.

To be clear, I’m in no way grousing about United Airlines—shocking, I know. Really, I’m not; that passage seems completely fine as part of an online FAQ. But if a company decided to vacuum up FAQs to power its conversational AI, the results would be, well, not remotely conversational. Compare that to the results generated from using chat transcripts to train conversational AI. Those utterances sound like something a real person would actually say because a real person actually said them.

Now, some would argue that stilted language and grammar don’t matter if customers get the answers to their questions. In fact, vendors have argued exactly this point to me. I just don’t happen to agree. Not even one little bit!

Conversational AI makes an experiential promise to the consumer. If it looks like webchat or messaging, it should feel like webchat or messaging. And spitting out six-paragraph knowledge articles or technobabble from a manual or exactingly crafted lawyerese just doesn’t provide that experience. In fact, for the most part, these interactions feel like a bait-and-switch: If I wanted to search your knowledge base, I would have done so. I didn’t, but you still shoved the knowledge base in my face.

So consider this a plea: When you consider moving into conversational AI, focus on fewer use cases so you can control the conversational experience. Better to slowly add content that feels like an actual dialogue between two people than to rush into broad deployments that provide answers, but poor experiences. Serve up too many of these wooden conversations and consumers will start avoiding your chatbots like anthrax. And that ain’t good. Or maybe that should read “in most eventualities, the outcomes would be suboptimal.”  

Ian Jacobs is a vice president and research director at Forrester Research.
