  • October 6, 2023
  • By R "Ray" Wang, founder, chairman, and principal analyst, Constellation Research

The 12 Major Risks of AI


In general, the opportunities for artificial intelligence in CRM are abundant. There is so much good ahead. But as with all technologies, the humans behind AI can turn it into a tool to advance humanity or a weapon to harm it.

Despite the widespread optimism around AI, leaders must understand the risks ahead. As part of Constellation’s Digital Safety and Privacy research, 12 major risks of AI have been identified, presented below in order of most likely to least likely, and with impacts ranging from comparatively low to unimaginably high.

1. Inaccurate hallucinations. By design, generative AI will hallucinate when its data and training are limited. Mitigating this risk requires better-trained models, more training cycles, and a dedication to addressing false positives and false negatives before these systems are put out into the wild.

2. Machine-scale security attacks. Security attacks using AI-powered offensive tools will include vulnerability scanning, vishing, biometric authentication bypass, malware generation, and polymorphic malware (which can change its appearance and signature).

3. Exponential algorithmic biases. Even with large datasets, AI systems can make decisions that are systematically unfair to certain cohorts, groups of people, or scenarios. Some examples include reporting, selection, implicit, and group-attribution bias.

4. Survivorship biases. History is written by the victors. While survivorship bias is another example of algorithmic bias, this risk warrants its own listing. The winners and their promoted narratives will create a bias in the details of a success or failure. Decisions that are based on incomplete or inaccurate information create risks in model formation.

5. Shifting of human jobs to machine-based jobs. Expect massive displacement of jobs as organizations decide among intelligent automation, augmentation of machines with humans, augmentation of humans with machines, or the human touch. Organizations must determine when they want to involve a human and when stakeholders will pay for human judgment and interaction.

6. Privacy and copyright liability. Many models have been trained on publicly available information. However, training on private datasets without permission will create major risks in liability.

7. Reality distortion. AI could create misinformation and disinformation overload. The generation of exponential amounts of misinformation and disinformation could produce a distortion field that will hinder human judgment and obfuscate reality.

8. Disruption of governments. With the persistence of deepfakes, the lack of content provenance, poor means of verification, and few tools to combat disinformation, some futurists and security experts expect the next political upheavals to be instigated by AI tools.

9. AI divides. From social inequities to uneven power dynamics, AI can create a scenario where only those with access to large data troves, massive network interactions, low-cost computing power, and infinite energy will win. This pits large companies against small companies and governments against individuals with no recourse.

10. Loss of human innovation and creativity. An overreliance on AI could result in a dependence that mitigates risks at the expense of innovation and disruption. Society will choose stability and reliability and lose innate human curiosity and appetite for risk.

11. AI overlords and authoritarianism. While the likelihood that AI could take over the world or society today is low, the rapid digitization of the physical world increases the theoretical possibility of an AI war. Countering large organizations that hold AI-derived insights and the means to influence behavior, both directly and indirectly, will require systems that protect individual freedoms and liberties by design.

12. Extinction risks. AI is at least theoretically capable of creating the risk of mass human extinction, and society should be cognizant of and proactive in mitigation measures. Recently, the Center for AI Safety put out a statement signed by more than 350 AI experts, influencers, and policymakers stating that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

THE BOTTOM LINE: CRM AND CX LEADERS SHOULD PLAN FOR THE RISKS AHEAD

Moving from the somewhat fanciful (if nightmarish) to the more mundane, forward-thinking leaders will plan for scenarios where customers own their data. Expect monetary and non-monetary value exchange for customer data to emerge through loyalty programs, beta testing, and permissioned promotions. Companies that value and respect customer data will drive long-term customer value and create new opportunities for mutually agreed-upon monetization. 

R “Ray” Wang is the author of Everybody Wants to Rule the World: Surviving and Thriving in a World of Digital Giants (HarperCollins Leadership) and founder of Constellation Research.

