Poor Call Center Technology Can Impact Customer Loyalty
Call center agents are not always customers' first point of contact. For companies that use an IVR, the technology can make or break an interaction.
Most contact centers use one-dimensional metrics like time-to-respond or number-of-calls-handled to gauge success or failure. That's simply not sophisticated enough, according to Frank Moreno, director of contact center solutions at Empirix, which hosted a customer event to discuss best practices for monitoring call center technologies.
To date, Moreno said during the daylong seminar, automated contact centers haven't been properly rated according to what truly matters: end-to-end service level management. "Efforts have been focused on the [contact center] agent," he said, rather than on the systems that guide calls to those agents. Hardware uptime, he noted, is hardly equivalent to software performance. What's required is quality assurance for automated systems, providing benchmarks for service that the systems have to live up to.
"You need to watch your system," Mark Danzenbaker, senior manager of contact center technology for consultancy Accenture, said during his keynote presentation.
In New York City, where Accenture helped set up and continues to help maintain a nonemergency municipal 311 help line, nearly 8 million calls have been fielded since the system was inaugurated last year. Live agents field every call, 95 percent of them within three rings, according to Danzenbaker. He also said the key to maintaining service on that scale is testing. "Our first couple of tests--well, I'm not ashamed to tell you, they were a nightmare. I cannot imagine ever putting in a service line without performance-testing it first."
In fact, Danzenbaker believes that several tests--a minimum of three--are the key to success. The first round will uncover problems; the second should test the solutions put in place to address those problems; and a third should produce a completely clean run. If the third test turns up any problems at all, the cycle should be repeated, Danzenbaker said.
Other lessons he's learned are to start small and ramp up, and "to test across the entire system, not just components [of it]." He also said that time should be taken at the start to ensure the testing process itself is designed properly, accurately stressing the business processes involved. Knowing there's a problem in the first place is the only way to adequately address it, he said.
As Moreno put it, "Reactionary is unacceptable," and being proactive is the only way to avoid system failure.
At financial services firm CIBC, it used to take one to two hours simply to be made aware a problem existed--and then only if a customer reported it. "Now we know about it in less than five minutes," Charbel Safadi, telecommunications analyst for CIBC's card products division, said during his presentation, "and we're not relying on the customer to tell us we have a problem."
Safadi said his overall aim has little to do with the technology itself. "Our goal is to ensure customer satisfaction," he said. "That's what it comes down to. If they're not satisfied, there are lots of other places they can go to."
Testing the automated voice systems to gauge technical availability is "the front door" to everything, Safadi said. Through that door is customer satisfaction, and on the other side of it lies customer loyalty.