
Tips to Avoid Drowning in Data

To ensure data is accurate, companies should find a benchmark or “truth set” against which any new data can be compared, recommends Matt Habiger, chief data scientist at marketing solutions provider TruFactor. Anomaly detection will alert the user to unexpected changes or data that lies outside of established norms.
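The benchmark idea can be sketched in a few lines: compare each incoming value against the established norms of a trusted "truth set" and flag anything that falls outside them. This is a minimal illustration with made-up numbers, not TruFactor's actual method.

```python
from statistics import mean, stdev

def find_anomalies(new_values, truth_set, threshold=3.0):
    """Flag values more than `threshold` standard deviations
    from the benchmark ("truth set") mean."""
    mu = mean(truth_set)
    sigma = stdev(truth_set)
    return [v for v in new_values if abs(v - mu) > threshold * sigma]

# Hypothetical metric stream checked against its benchmark.
benchmark = [100, 102, 98, 101, 99, 100, 103, 97]
incoming = [101, 99, 250, 100]
print(find_anomalies(incoming, benchmark))  # the 250 reading stands out
```

Real pipelines use far more sophisticated models, but the principle is the same: a stable reference distribution turns "unexpected patterns" into something a system can alert on automatically.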

“You should always be checking for unexpected patterns,” Habiger says. “We spend a lot of time building our models so that we have good, accurate records.”

Another way to increase confidence in data is to rely on multiple sources, RollWorks’ Bordoli says. “There is no single source of the truth. A single data source is wrong 30 percent of the time. We ingest multiple data sources. You need to understand the level of agreement or disagreement between data sources.”

If three disparate data sources agree on information, a user can have a 97 percent accuracy rate, Bordoli explains. “Normalization and agreement give you confidence in the data.”
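Bordoli's figures line up with a simple back-of-the-envelope calculation: if each source errs independently about 30 percent of the time, the odds that all three are wrong at once are roughly 0.3 cubed, or under 3 percent. The independence assumption is a simplification, but it shows why agreement across sources boosts confidence.

```python
# Sanity check on the figures quoted above, assuming three
# independent sources that are each wrong 30% of the time.
single_source_error = 0.30

# Probability that all three sources are simultaneously wrong.
all_wrong = single_source_error ** 3

print(f"All three wrong:    {all_wrong:.3f}")      # 0.027
print(f"At least one right: {1 - all_wrong:.3f}")  # 0.973, i.e. ~97%
```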

With systems that haven’t undergone that level of scrutiny, the biggest problem is that employees will not use them or will not trust their output, Krishen points out.

The right personnel also make a world of difference. To turn data into actionable intelligence, companies must have dedicated data science teams, Bordoli says. “That doesn’t mean someone who is working on it as part of his or her job, but a team that has sole responsibility for the data science.”

Bordoli adds that data science isn’t just technology; it’s technology and humans working together.

Data scientists know how to lay out data structures in such a way that a single query can produce multiple answers, Habiger adds. “The idea is to process once for many uses.”
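A tiny sketch of that "process once for many uses" idea: a single pass over the records accumulates statistics that can then answer several different questions without re-reading the data. The field names here are illustrative, not from the article.

```python
from collections import defaultdict

records = [
    {"region": "east", "sales": 120},
    {"region": "west", "sales": 80},
    {"region": "east", "sales": 200},
]

# One scan of the data builds a reusable summary structure...
stats = defaultdict(lambda: {"count": 0, "total": 0})
for r in records:
    s = stats[r["region"]]
    s["count"] += 1
    s["total"] += r["sales"]

# ...which then serves multiple queries with no further scans.
print(stats["east"]["total"])                       # 320
print(stats["west"]["count"])                       # 1
print(max(stats, key=lambda k: stats[k]["total"]))  # east
```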

Bergh adds that improperly constructed data queries and incorrect uses of data lead to poor results, or to late ones when companies must reanalyze the data after the initial query fails.

“You need to automatically test files and take a careful look at the results,” Bergh says. “This operational stuff is important. Focus your team on lowering the error rate.”
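In the spirit of Bergh's advice, automated file checks can be as simple as counting how many incoming rows violate basic rules and tracking that error rate over time. The rules and field names below are made up for illustration.

```python
def validate_rows(rows, required_fields):
    """Return the fraction of rows failing basic completeness checks."""
    errors = 0
    for row in rows:
        if any(row.get(f) in (None, "") for f in required_fields):
            errors += 1
    return errors / len(rows) if rows else 0.0

rows = [
    {"id": "1", "email": "a@example.com"},
    {"id": "2", "email": ""},            # missing value -> counts as an error
    {"id": "3", "email": "c@example.com"},
]
print(validate_rows(rows, ["id", "email"]))  # 1 of 3 rows fails
```

Wiring a check like this into every file ingest, and watching the returned rate, is one concrete way to "focus your team on lowering the error rate."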

Artificial intelligence, especially machine learning, has a huge role to play here. With machine learning, systems get smarter the more they are used and can automatically refine results as needs and preferences dictate.
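A toy sketch of a system "getting smarter the more it is used": each piece of user feedback nudges a recommendation score via a simple online update. This is a generic illustration of the feedback-loop idea, not any vendor's algorithm.

```python
# Scores start neutral; feedback moves them toward 1 (accepted)
# or 0 (rejected) by a small learning rate each time.
scores = {"option_a": 0.5, "option_b": 0.5}
LEARNING_RATE = 0.2

def record_feedback(option, accepted):
    """Nudge the option's score toward the observed outcome."""
    target = 1.0 if accepted else 0.0
    scores[option] += LEARNING_RATE * (target - scores[option])

for _ in range(5):
    record_feedback("option_a", accepted=True)   # users keep accepting A
record_feedback("option_b", accepted=False)

print(scores["option_a"] > scores["option_b"])  # True: A now ranks higher
```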

Technology is a necessity if companies hope to process the vast amount of data available, but systems still need to be designed correctly and tuned often to produce the right results.

Habiger adds that companies need to work closely with their data scientists to determine just how granular they want to be with their data collection and analysis. In some cases, more granularity can produce more valuable insights, but at other times it just produces additional noise with little to no real value.

“You need to determine if you want to spend that extra effort,” Habiger says.

With experts predicting even more extensive data streams, smart decisioning and data modeling to uncover actionable intelligence will become increasingly necessary, Krishen states.

Software today can make an untold number of calculations, predictions, and recommendations, but the decision to accept or reject the software’s conclusions still lies in the hands of people, Honig says. The software might suggest contacting a purchasing manager to make a sale, for example, but the salesperson might know from experience that another person at the target company actually drives the purchasing decision. As long as that contact information is fed back into the software, machine learning should be able to update future recommendations to reflect the proper contact. In the end, while analytics tools are critical, human beings remain central to all of it.

Phillip Britt is a freelance writer based in the Chicago area. He can be reached at spenterprises@wowway.com.
