
Tips for Battling Bias in AI-Based Personalization


Artificial intelligence (AI) is becoming much more common as an embedded feature in a variety of customer service, marketing, and sales technologies, and its use is only expected to grow. In fact, PricewaterhouseCoopers projects that AI will contribute $15.7 trillion to the global economy by 2030, and business leaders at IBM anticipate that adoption of AI in the corporate world will grow by as much as 90 percent in the next 18 to 24 months.

The reasons are obvious: AI systems automate answers to the most common and simplest queries, such as “What is my account balance?”, leaving human agents free to handle more complex and demanding customer service issues.

These systems are designed to accept and understand customer queries and perform other functions, such as providing a direct answer, asking a clarifying question, pushing the interaction to a live agent, or completing an order. Beyond that, they can also personalize responses, recommend products, provide targeted offers, guide agents and sales reps toward next best actions, and more, guided by previous interactions and purchases, current trends, or predictions about the future.

But in preparing and delivering those insights to customers and employees, does bias creep in?

“This is an important question, because we are making increasingly critical decisions in AI,” says Sagie Davidovich, cofounder and CEO of SparkBeyond, which provides an AI-powered business problem-solving platform. “You want to have accountability, transparency, and inclusiveness.”

Peter van der Putten, director of decisioning and AI solutions at Pegasystems, agrees.

“When people talk about AI, both the proponents and the detractors like to mystify it as a silver bullet or something that is inherently evil,” he says. “Bias can be a real problem. A lot of AI is being used to make all kinds of automated decisions. It’s important that we keep it trustworthy and fair, transparent, and responsible.”

And for years, the answer to the bias question has been a resounding yes.

Automated systems can perpetuate harmful biases, according to Alex Miller and Kartik Hosanagar. In their research, they found that dynamic pricing and targeted discounts could be subject to biases if left to an algorithm.

“Even the largest of tech companies and algorithmic experts have found it challenging to deliver highly personalized services while avoiding discrimination,” they said.

“Bias can creep in from the data that we use and select to train the models,” van der Putten admits.

Left undetected, bias in AI can lead to harmful discriminatory practices, such as fewer loans, insurance policies, or product discounts offered to underserved populations. Business consequences can range from skewed or inaccurate campaign results to regulatory violations to the loss of public trust.

Companies in industries with strong anti-discriminatory laws, such as financial services or insurance, have to be particularly careful when it comes to AI-based offers, cautions Peter Cassat, a partner at Culhane Meadows, a New York law firm. “You need to make sure any of your processes are applied equally and don’t have a discriminatory impact, and you do so in compliance with all fair lending laws or fair housing laws. Just because you may be substituting technology to make some of those decisions for you in your CRM, your call centers, you have to make sure that they’re not having a discriminatory impact.”

The discriminatory impact can arise quickly and inadvertently. Nowhere was this more evident than with the ill-fated launch of the Microsoft Twitter bot Tay in 2016. Microsoft described Tay at the time as an experiment in conversational understanding. The more one chatted with Tay, the smarter it would become, learning to engage better through casual and playful conversation, or at least that was the goal.

But very shortly after Tay launched, people started tweeting at the bot with all sorts of misogynistic and racist remarks. When Tay started repeating those phrases back to users, Microsoft quickly pulled the plug on the technology.

A HISTORICAL PERSPECTIVE ON AI BIAS

Long before AI became embedded in the vernacular of the technology industry, automated technology was blamed for bias in the lending industry.

Lenders quickly adopted automated decision-making to speed the loan process, assuming that by automating loan approval or denial decisions they would eliminate any potential bias (even if unintended) by a lender’s human employees.

It’s a common belief still shared by many today. A study by WP Engine, a WordPress digital experience platform and hosting provider, found recently that 92 percent of workers believe AI provides an opportunity to examine both organizational and data-related bias, and 53 percent believe AI is an opportunity to overcome human bias.

But the study also found that nearly half (45 percent) thought that bias in AI could cause an unequal representation of minorities, meaning it would succumb to some of the same issues as earlier technology.

In the lending industry’s case, early technologies were designed to follow a detailed series of steps and act based only on clearly defined data and variables, which were limited by the technology of the time (pre-Big Data).

But bias crept into the process because the algorithms involved relied on data and variables that were themselves discriminatory, leading to loan denials for many African-American and Hispanic mortgage applicants.

Several experts explained that the biases in the algorithms come from the developers who build them; these developers have their own inherent biases, even if unintentional ones.

“While the data that marketers use to segment their customers are not inherently demographic [and therefore biased], these variables are often correlated with social characteristics,” Miller and Hosanagar said.

“Simply put, AI models are biased because the data that feeds them is biased,” says Chris Doty, content marketing manager at RapidMiner, a data science software provider. “For example, if you have a speech recognition system trained only on speakers of American English, it’s going to struggle with anyone else—people from Australia, speakers of non-standard varieties of American English, non-native speakers, etc.”

Similarly, if the customer profiles used to train an algorithm contain attributes that correlate with demographic characteristics, the algorithm is highly likely to make different recommendations for different groups, according to Miller and Hosanagar.
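As a rough illustration of the point Miller and Hosanagar make, the short Python sketch below scans a customer table for features that move closely with a protected-attribute indicator before any model is trained. The column names, the 0/1 indicator, and the 0.3 correlation threshold are all hypothetical assumptions for the example, not any vendor’s or researcher’s prescribed method.

```python
# Sketch: flag training features that correlate with a protected attribute.
# Column names and the 0.3 threshold are illustrative assumptions.
import pandas as pd

def proxy_check(df: pd.DataFrame, protected_indicator: str,
                threshold: float = 0.3) -> pd.Series:
    """Return numeric features whose correlation with the protected
    0/1 indicator column exceeds the threshold, strongest first."""
    corr = df.corr(numeric_only=True)[protected_indicator].drop(protected_indicator)
    flagged = corr[corr.abs() > threshold]
    return flagged.reindex(flagged.abs().sort_values(ascending=False).index)

# Toy example: ZIP-code income rank tracks the protected group closely,
# so it gets flagged as a likely proxy even though it "isn't demographic."
customers = pd.DataFrame({
    "zip_income_rank":  [1, 2, 9, 8, 2, 9, 1, 8],
    "avg_basket_value": [20, 25, 80, 75, 22, 90, 18, 70],
    "protected_group":  [1, 1, 0, 0, 1, 0, 1, 0],
})
print(proxy_check(customers, "protected_group"))
```

Any feature the check flags is a candidate for closer review before it is allowed to drive personalized recommendations.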

Doty also notes that bias can come into play over time as models “drift”: the real world changes, but the model isn’t updated with data that reflects the current state of affairs, so it gradually stops matching reality.

“Ultimately, models are only as good as the data that is used to train them,” Doty states. “Ensuring you have a wide range of relevant, up-to-date data is key, especially when it comes to speech projects.”
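To make the drift point concrete, here is a minimal monitoring sketch in Python. It assumes you have kept the model’s scores on its training data and can sample scores from recent traffic; the significance cutoff and the retraining hook are illustrative assumptions, not a prescribed practice.

```python
# Sketch: detect model drift by comparing the score distribution on recent
# traffic against the distribution at training time. The 0.01 cutoff and
# the retraining hook are illustrative assumptions.
from scipy.stats import ks_2samp

def drift_detected(training_scores, recent_scores, p_floor: float = 0.01) -> bool:
    """Flag drift when a two-sample Kolmogorov-Smirnov test says the two
    score distributions are unlikely to come from the same population."""
    _statistic, p_value = ks_2samp(training_scores, recent_scores)
    return p_value < p_floor

# Typical use: score last week's interactions and compare.
# if drift_detected(train_scores, last_week_scores):
#     schedule_retraining()   # hypothetical hook into your pipeline
```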

MINIMIZING BIAS: MORE DIVERSITY, MORE TESTING

Easier said than done, of course. The simple truth is that most developers have similar backgrounds, so their biases, intended or not, tend to be reflected in any system they build, says Timothy Summers, CEO of Summers & Co., an organizational design and cyber strategy consulting firm. That’s why facial recognition, speech, and other AI-based systems will fail more often as they are used with more diverse populations.

Cassat agrees that automated systems tend to perpetuate the status quo because the data used to drive decisions is historical data that might have been gathered using ZIP codes or some other means with inherent bias.

“One serious problem is that of expectation, of what AI can really do. At the end of the day, an AI system is educated and trained to solve a particular problem, and that is pretty much its entire universe,” says A.J. Abdallat, CEO of Beyond Limits, an AI company. “These systems are not humans who can freely interact with their environment, go to libraries, call up experts on the phone, perform experiments, and test hypotheses.

“They have a myopic view of the world. They are machines, not people,” he adds.

Even so, most experts lament that bias in AI can likely never be fully eliminated. It can, however, be minimized by having a diverse array of developers involved in the systems from the beginning, they say.

The more diverse the developers, the more diverse the AI they will develop, Summers says. “You need to embrace a more diverse approach for developing these systems. You need to innovate responsibly. You need to co-create with the community that you are trying to serve.”

The best way to eliminate as much bias as possible from AI systems designed to make personalized offers is to include people from as many different backgrounds as possible in their development, experts agreed.

And it also helps to have humans and automated systems working together, according to Jonathan Moran, product marketing manager for customer intelligence at SAS.

AI personalization algorithms, he says, “have been created by a human, likely a data scientist. They are then executed by a machine that, without logical understanding, may misinterpret and inadvertently deploy personalization incorrectly. Implementing code review between programmers and then explaining the algorithm to downstream marketers allows deficiencies to be identified and resolved.”

Miller and Hosanagar also recommend formal oversight of a company’s internal systems, cautioning that AI audits can be complicated, involving assessments of fairness, accuracy, and interpretability.

“The first step is to actually understand and detect any bias in your predictive models,” Pegasystems’ van der Putten says. “You also need to continuously track what systems are producing. They might start bias-free, but bias can evolve [such as with Microsoft’s Tay].”

Moran agrees, stressing continued testing of AI algorithms.

“Test against samples as well as whole datasets where possible. Running control groups through personalization initiatives where AI is employed allows organizations to see how personalization results are being delivered and how much bias exists,” he says. “Additionally, removing data variables that organizations believe contribute to biases from models, retraining the models, and then measuring the difference in predictions (which inform personalization) will help organizations understand what variables do and do not contribute to bias. Removing this data from consideration helps remove bias in AI-based personalization.”
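A stripped-down version of the remove-retrain-compare step Moran describes might look like the following Python sketch. The logistic-regression model, the feature names, and the summary metric are stand-ins chosen for illustration; it assumes the features arrive as a NumPy matrix.

```python
# Sketch: retrain without suspect variables and measure how much the
# propensity scores that feed personalization change. Model choice,
# feature handling, and the summary metric are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def prediction_shift(X, y, feature_names, suspect_features):
    """Average absolute change in predicted propensity per customer when
    the suspect features are removed and the model is retrained."""
    X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

    full = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    keep = [i for i, name in enumerate(feature_names) if name not in suspect_features]
    reduced = LogisticRegression(max_iter=1000).fit(X_train[:, keep], y_train)

    p_full = full.predict_proba(X_test)[:, 1]
    p_reduced = reduced.predict_proba(X_test[:, keep])[:, 1]
    return float(np.mean(np.abs(p_full - p_reduced)))
```

A large shift suggests the removed variables were doing real work in the predictions, which is exactly the signal Moran says organizations should examine before deciding whether that work reflects bias.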

Another solution to help limit bias in AI-based systems is to develop models that require less training for new languages while still being able to quickly adapt to the users’ language, according to Vikrant Tomar, founder and chief technology officer of Fluent.ai, a speech understanding and voice user interface software provider. “At Fluent.ai, this has been a fundamental motivating factor behind the development of our end-to-end spoken language understanding (SLU) systems. Our SLU system can learn any language or mix of languages quickly and also allows the end user to adapt the model with very little feedback directly on the device,” he says.

Pegasystems has also actively worked to address the issue of bias in AI design. Last year, it launched Ethical Bias Check as part of the Pega Customer Decision Hub. According to the company, the feature simulates AI-driven customer engagement strategies before they go live, flagging any that are thought to be biased.

The company claims to be the first to offer always-on bias detection across all customer engagements on all channels, whether a marketing offer delivered on the web, a promotion placed in an email, or a customer service recommendation made via a chatbot.

Pega’s feature is designed to detect unwanted discrimination by using predictive analytics to simulate the likely outcomes of a strategy.

After setting their testing thresholds, clients receive alerts when the bias risk reaches unacceptable levels, like when the audience for a particular offer might skew heavily in favor of or away from specific demographics.
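Pegasystems has not published its implementation, but the threshold-alert behavior described above can be sketched roughly in Python: simulate which customers a strategy would target, then compare each group’s share of that audience with its share of the overall customer base and raise an alert when the skew crosses a configured limit. All names and the 15 percent limit below are hypothetical.

```python
# Sketch of threshold-based bias alerting (not Pega's actual code).
# `selected` is a boolean Series marking customers the simulated strategy
# would target; group_col and the 0.15 skew limit are assumptions.
import pandas as pd

def bias_alerts(customers: pd.DataFrame, selected: pd.Series,
                group_col: str, max_skew: float = 0.15) -> pd.DataFrame:
    base_share = customers[group_col].value_counts(normalize=True)
    offer_share = customers.loc[selected, group_col].value_counts(normalize=True)
    report = pd.DataFrame({"base_share": base_share,
                           "offer_share": offer_share}).fillna(0.0)
    report["skew"] = report["offer_share"] - report["base_share"]
    report["alert"] = report["skew"].abs() > max_skew
    return report
```

Each row of the resulting report shows one demographic group, how far its share of the simulated offer audience deviates from its share of the customer base, and whether that deviation trips the alert.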

“As AI is being embedded in almost every aspect of customer engagement, certain high-profile incidents have made businesses increasingly aware of the risk of unintentional bias and its painful effect on customers,” says Rob Walker, Pegasystems’ vice president of decisioning and analytics.

How well Pegasystems’ bias mitigation technology, or any similar technology, works remains to be seen. But most agree that combating bias in AI-based systems is an issue that will plague the CRM industry well into the foreseeable future.

Phillip Britt is a freelance writer based in the Chicago area. He can be reached at spenterprises@wowway.com.
