A Backstage Pass to Technology Evaluations
New year, new…set of enterprise initiatives.
It’s hard to be catchy and businessy at the same time. I hope in time you will learn to forgive me. But for real, you’re probably chipping away at your 2025 plans, and I’ll bet most of you have some tech evaluations on the horizon.
Something that you might not know about your favorite End Notes columnist (me—I’m talking about me): Before joining Forrester Research, I was on the vendor side. I was a sales engineer, gathering requirements, writing technical specs, and giving killer demos. Now, as an analyst, in addition to doing research and giving client advice, I am on the receiving end of demos, and it is my job to evaluate technology.
I know a thing or two about assessing tech. I’m also more than happy to share my knowledge!
So without further ado…
Christina’s Advice for Evaluating Contact Center Tech (2025 Edition)
It’s actually a little hard to shut me up once I get going, so to save your (and my editors’) sanity, I’m going to narrow it down to three suggestions:
Get beyond the sales demo. Demos don’t always tell the truth. (Gasp—but Christina, doesn’t that mean you lied in demos before?) Yes and no. Let me explain.
It is not always feasible to demonstrate everything live. For example, a conversation analytics vendor likely doesn’t have access to millions of contact center calls, but analytics dashboards look silly without data to populate them. The options are (1) show actual client data (yikes) or (2) use dummy data that has been structured to tell some sort of operationally appropriate story (yay).
The reality, though: Some vendors stretch the truth further than others, and it’s very, very hard to tell the difference. As part of every evaluative cycle, I would strongly recommend incorporating a hands-on component. Require that vendors follow demos with a hands-on workshop that allows your users to interact with the software in the very same scenario shown in the demo. There will be a learning curve with any new software, and becoming super-experts by the end of the workshop is not the goal. But this approach will allow you to see your vendors’ enablement resources in action, experience the software’s usability and responsiveness firsthand, and cut through the demo pixie dust sprinkled during the sales cycle.
Ditch the semantics. (Oh sure, Christina. Just as every vendor and its dog starts calling me up about agentic whatsits, you want me to sidestep the jargon?)
Hey, if it’s any consolation, they’re calling me about agentic whatsits, too. But I get your point. It’s never been easy to blast past the buzzwords, and it seems new terminology is being invented every other day.
I’m really asking you to evaluate in a way that makes you impervious to the charms of jargon. It doesn’t matter whether the vendor pitches you “agentic AI” or “proactive serendipity processors.” You need to ask what the thing actually does and whether that solves your actual business problem.
Don’t forget about the “boring” stuff. In this brave new world of agentic whateverthehecks and genAI everything, it is so very easy to take the simple stuff for granted. But I am here to tell you: Boring is beautiful, baby. Or at the very least: Boring is beneficial.
A lot of my clients have been pushing internal teams to build tech vs. buying it. Now I’m not saying I’m on Team Buy—I’ve left my vendor days behind me—but I do hope you’ll hear me out here.
If you are going to consider build and buy options in the same evaluation cycle, please make sure you’re including the boring stuff. Many teams base their evaluations on model accuracy and little else. Don’t get me wrong, it’s valuable to have an accurate model, and you do want to be aware of discrepancies in accuracy. But what about the button that lets your agents dispute their automated quality scores? Or automated alerts and triggers for specific events in the software? Few vendors put these things at the top of their marketing data sheets, but mundane capabilities and workflows often unlock the value of certain software solutions.
Good luck and happy assessing!
Christina McAllister is senior analyst, Forrester Research, covering customer service and contact center technology, strategy, and operations.