The Ethics Dilemma of AI for Sales
Having recently completed Sales Mastery's third annual AI-for-Sales Study, I can report that the buzz around artificial intelligence augmenting sales organizations is still increasing rather than abating. Case study examples are emerging that document how AI is already being leveraged to create innovative solutions for optimizing many aspects of customer life cycle management, with the potential to transform many more. And while I continue to be impressed by what AI for sales can do, I feel compelled to raise this reality check: The excitement needs to be tempered with a deeper understanding of whether there are things that AI should do.
As AI factors into more and more aspects of commerce, ethical issues that need to be confronted are becoming apparent. Some of these issues can be inadvertent. One example is bias. Science has identified nearly 200 biases that influence human decision making. Biases can be introduced into AI algorithms accidentally by data scientists, or as the result of using data that contains these biases to train the algorithms.
Another ethics issue is one I experienced personally. A U.K. firm that knew about my interest in AI for sales reached out to brief me on its efforts to use AI to perform detailed buyer personality analysis. To demonstrate the scope of its assessment, the firm emailed me an 18-page report about me. Note that the firm's analysts had never met me and never talked to me, but by analyzing data from my LinkedIn profile, YouTube videos of me speaking at conferences, my Twitter feed, and so on, they were able to create a Big Five (OCEAN) personality assessment, an appraisal of my social media interests, insights into how I stack up against others in my field, and more. While part of me found this academically intriguing, another part saw it as an invasion of privacy.
More examples of ethical questions regarding the use of AI emerge every day. And so, as part of the 2021 AI-for-Sales Study, we asked the survey participants whose companies had implemented AI for sales what their organizations were doing regarding the ethical use of these tools.
The chart above summarizes their responses and shows that only 17 percent have developed a formal policy for the ethical use of AI. So clearly there is a lot of work to be done in this area. Fortunately, insights and resources on how to deal with the ethics of AI usage are emerging right when we need them.
One of these, by Kathy Baxter, principal architect of the Ethical AI Practice at Salesforce, is a report titled "AI Ethics Maturity Model"; it presents a concise framework for determining where you are now, where you need to be, and how to get there. Another resource is The AI Dilemma: A Leadership Guide to Assess Enterprise AI Maturity, written by Cindy Gordon and Malay Upadhyay. Leveraging resources like these to deal with this issue proactively can save your company a lot of potential future trauma, because once the AI genie is out of the bottle, it is going to be hard to put back in.
Jim Dickie is a research fellow for Sales Mastery, a research firm that specializes in benchmarking case study examples of how companies are leveraging technology to transform sales. He can be reached at firstname.lastname@example.org or on Twitter @jimdickie.