January 20, 2026
By Ian Jacobs, vice president and lead analyst, Opus Research

Too Much, Too Little, Too Late: Rethinking ‘100 Percent Coverage’ in Contact Center QA


“Too much, too little, too late to try again with you” —Johnny Mathis and Deniece Williams (“Too Much, Too Little, Too Late”)

In the quality world of contact centers, we used to live in the land of 2 percent. A handful of calls per agent per month, chosen at random, and we called it “QA.” It was closer to QA tourism than QA practice. You parachuted in, took a few snapshots, and left with a vague sense that things were “fine.”

Then conversation intelligence vendors showed up promising the opposite extreme. “We analyze 100 percent of your customer interactions!” On the surface, that sounds like progress. If a little data is good, surely all the data must be amazing.

That’s not the right framing.

The questions that matter are simple. What are you trying to do with the data? Where does human attention move the needle? Once you start there, “100 percent or bust” begins to look more like a marketing slogan than a design principle.

Sometimes, averages are not your friend. Let’s start with agent performance. For years, QA programs chased a statistically valid “view” of the agent. Enough calls to say their average handle time, sentiment, or quality score was this number instead of that number.

The problem is that averages are boring. You don’t really care that an agent’s “typical” call is fine. You care about the handful of calls that almost triggered a complaint, quietly violated a disclosure, or were so good you want to frame them in the break room.

Those are long-tail events. They do not show up very often, but they punch far above their weight when it comes to customer experience, compliance, and revenue. A 2 percent random sample will miss most of them. But that doesn’t mean you need a human to review every interaction, either. No supervisor has that kind of time, and if they did, they would probably use it to find a new job.

What you need is analytics that run across most or all interactions and automatically surface the outliers. Then humans can spend their limited attention on the strange, the risky, and the brilliant. Full coverage can be helpful, but primarily as machine coverage, not human coverage.

Now flip the lens and look at where conversation intelligence feeds automation. When you are deciding which use cases to automate, or what data to feed into your AI training pipeline, coverage starts to matter in a different way.

If you only see a thin slice of interactions, your bots will be trained on the happy path and a couple of cute exceptions. The first time a seasonal issue appears, or a niche product line causes chaos, or one specific region invents its own vocabulary, your well-trained model suddenly looks like it needs a hug.

Here, edge cases are not just annoyances. They are exactly the kind of thing that breaks customer trust in automation. Broad coverage helps you map real-world variation. Not just the top 10 intents, but the weird little pockets of demand that either need better handling or should be deliberately kept away from the bots.

This still does not automatically require 100 percent forever. But it clearly does argue for much higher coverage than the old QA approach, especially during discovery and model training. So, what is the sane middle ground between 2 percent and 100 percent?

In practice, more mature programs tend to do three things: ingest and analyze as close to 100 percent of interactions as is economically sensible with machines; use targeted, stratified sampling for human review instead of coin-flip random sampling; and aim human attention at outliers and emerging patterns rather than at checking boxes on a QA form.
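To make that concrete, here is a minimal, illustrative sketch of how a team might combine simple outlier flagging with stratified sampling over machine-scored interactions. It is not any vendor’s actual implementation; the field names (intent, quality_score, handle_time_sec), thresholds, and per-stratum counts are assumptions made up for the example.

```python
import random
from statistics import mean, stdev

# Hypothetical machine-scored interactions; field names and distributions are illustrative only.
interactions = [
    {"id": i,
     "intent": random.choice(["billing", "returns", "tech_support", "rare_region_issue"]),
     "quality_score": random.gauss(80, 8),
     "handle_time_sec": random.gauss(420, 120)}
    for i in range(10_000)
]

def flag_outliers(items, key, z_threshold=2.5):
    """Flag interactions whose metric sits far from the mean (simple z-score cutoff)."""
    values = [item[key] for item in items]
    mu, sigma = mean(values), stdev(values)
    return [item for item in items if abs(item[key] - mu) / sigma >= z_threshold]

def stratified_sample(items, strata_key, per_stratum=5):
    """Draw a fixed number of interactions per stratum instead of a blind random 2 percent."""
    strata = {}
    for item in items:
        strata.setdefault(item[strata_key], []).append(item)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

# Machines score everything; humans only see the outliers plus a small stratified slice.
review_queue = flag_outliers(interactions, "quality_score") + \
               stratified_sample(interactions, "intent")
print(f"{len(review_queue)} of {len(interactions)} interactions routed to human review")
```

Even this toy version makes the point: the machines touch every interaction, while human reviewers see only the small set of conversations that actually deserve their attention.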

In that model, “100 percent” is a ceiling, not a belief system. You want the ability to analyze everything. You do not want to treat everything the same way. So when a vendor proudly says, “We cover 100 percent of your interactions,” the follow-up should be simple: “Great. How do you help us not treat 100 percent of those interactions equally?”

If the answer is just “dashboards,” keep pushing. The goal is not universal surveillance. The goal is smarter use of human attention and better training data for automation, with just enough math and common sense.

Ian Jacobs is vice president and lead analyst at Opus Research.

