Real-Time Analytics Provides ‘Quality Assurance’—and Privacy Concerns
‘I always feel like somebody’s watching me / And I have no privacy’
—Rockwell (‘Somebody’s Watching Me’)
Late last year, the American Psychological Association released the results of its annual “Stress in America” poll. It became clear that the state of the nation was totally freaking Americans out. Nearly two-thirds of respondents cited the future of the nation as a very or somewhat significant source of stress—amazingly, that was a higher percentage than those stressed out by money and work! While I’m no mental health expert—in fact I only have passing acquaintance with mental health at the best of times—I know that can’t be good.
I bring this up because…well, because it’s fascinating, no? But I also mention it because I’d like to distract any of you fretting over the prospects for the world by giving you a dose of paranoia about the future of our brand interactions. Seems a fair trade.
We know companies monitor our interactions with them. We’ve all become acclimated to the message “This call may be recorded for quality assurance purposes.” Pretty much all our calls are recorded, even if only a tiny percentage are reviewed for those “quality assurance” purposes. In many ways, these quality-focused messages are misleading; many brands record the calls for regulatory compliance reasons, or to have legal backup in any future dispute with the customer. But at least companies feint toward capturing the recordings for our benefit. As technology evolves to allow speech analytics tools to be used in real time, that quality assurance message might need to change.
Some companies use technology that allows them to create personality and communication-style models of each customer and service rep. They then use the models to connect callers to agents best suited to communicate with them. The models that drive the personality matching are derived from previous recordings. Remember those quality assurance recordings? They can be used to classify personality types.
To be clear, such categorization can genuinely improve customer service. I’ve certainly struggled through calls with companies in which communicating with the agent felt as impossible as sailing the English Channel in a teacup. Connecting me with an agent who just gets me would be a great thing. But since the tools are constructing models of our personalities that can be reused by other companies, shouldn’t we have a say in the process? Shouldn’t this be an opt-in deal? Or once created, shouldn’t we be able to have companies forget these communication-style models, GDPR-style? At a minimum, calling this a “quality assurance” purpose in the recorded disclosure doesn’t really cut it.
This use of real-time analytics technology might be only the first problematic glimmer of the privacy concerns we will face. For example, we are starting to see companies use real-time tools to gauge our emotional state moment by moment. The idea behind such analysis is to guide agents toward serving us better or to locate the ideal moment to make an upsell offer. If agents talk over customers too often, they get a prod to let customers finish their thought. Yeah, I know this sounds like basic stuff, but these emotional awareness nudges can be powerful. If a company has records of my emotional swings on a call, however, what’s stopping it from using them to construct a model of my emotional patterns that it can reuse beyond that single call? At a minimum, we deserve a heads-up that this is happening and the ability to (emotionally) say, “Heck no! I don’t want you trying to map out my emotional landscape like some corporate psychotherapist.”
Because these tools really can benefit consumers, all it would take is a little explaining on the part of companies, and many of us would opt in. Give us a little enlightened self-interest and we’ll let them classify us all they want.
Ian Jacobs is a principal analyst at Forrester Research.