The new SpeechTools application puts complex analytical tools in the hands of nonspecialist users, speeding deployment and reducing cost.
Posted Aug 7, 2006
Enterprise speech analytics vendor CallMiner today released SpeechTools, its latest application for the CallMiner Analytics Suite. SpeechTools democratizes speech analytics by putting the necessary fine-tuning in the hands of end users rather than linguistic software developers. According to the company, this can cut deployment time for speech analytics dramatically.
In the past, specialists were needed to customize a speech application to make it useful, tweaking a variety of factors such as dialect, acoustics, and even accents. "An application requires specific tuning if it's to be used in a packed, noisy contact center, and different tuning for quieter environments. Adding new languages outside the norm of Spanish or American and British English required a speech scientist who is also familiar with the analytics engine," says Cliff LaCoursiere, cofounder and senior vice president of business development for CallMiner.
LaCoursiere says SpeechTools empowers end users to add new words or languages, or to account for the unique acoustic environments of different recording platforms. "SpeechTools takes a number of disparate tools and puts the ability to do this into a regular user's hands." He claims the process of adding a new language on top of the analytics engine can take as little as two weeks with SpeechTools, instead of the two to three months necessary with other customizations. "Our customers and partners are no longer dependent on software developers with limited time and resources." According to LaCoursiere, initial interest is greatest among large corporations with the expertise and global presence to necessitate speech analytics, and among government security professionals who need the ability to monitor communications in foreign languages.
"There's some learning curve to become proficient with the software, but I can say with confidence that nothing is lost [from traditional methods]. Software knobs tune the statistical engine, taking the complexity away from the user," LaCoursiere says. "Certainly you need someone who speaks the language to put it into the recognition engine, but a user can do this with as little as 10 to 20 hours of recordings."
If the product works as advertised, it could lead to a blossoming of speech analytics in a number of fields. Jim Dickie, partner with CSO Insights, says that putting the tools in users' hands is likely a step in the right direction. "There's promise that speech technology could be used in the CRM system, or in transcription. It can be done with today's technology, but not transparently," Dickie says. "There's always some end-user customization required in speech engines, and software like SpeechTools lets the customer do the heavy lifting and take the implementation that last half mile."
As with any expanded capability, Dickie warns that SpeechTools is not a license for companies to add speech analytics to their business processes on a whim. "There's no problem giving tools to users, as long as there's coordination."