Pending Legislation Could Upend AI in CRM
While privacy issues have dominated CRM-related legislation for several years, 2026 promises to be the year when artificial intelligence becomes the primary target of government regulators.
“AI-specific legislation is coming fast, and most email marketers are unprepared,” says Guy Hanson, vice president of customer engagement at Validity, a CRM data quality, email performance, and campaign analytics provider. “While the U.S. is unlikely to replicate the [European Union’s] comprehensive AI Act anytime soon, individual states are rapidly enacting their own patchwork of AI regulations, creating a compliance minefield for marketers.”
The industry’s biggest need is clarity, according to Hanson. “Clear rules around consent, automation, and data usage would allow CRM platform providers to innovate responsibly while giving businesses confidence that they can scale without unexpected regulatory exposure.”
It’s that patchwork of AI legislation cited by Hanson that prompted President Donald Trump’s administration to issue an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” in December. The order declares a federal policy to sustain and enhance U.S. AI dominance through a minimally burdensome national framework and announces the administration’s intent to work with Congress on uniform federal standards that would supersede conflicting state laws. Just about every state in the nation has some form of AI legislation on the books, but there has been little action at the national level.
“For marketers using AI to process customer data in sophisticated ways, 2026 will be a reckoning on consent practices. You need to ask yourself: Did your customers consent to having their personal data processed by AI algorithms? Is your legal basis for data processing still valid, given what AI represents?” Hanson says.
Here’s a look at some of the other pending legislation at the federal level dealing with AI and privacy:
The American Artificial Intelligence Leadership and Uniformity Act (HR 5388). Introduced in September by Rep. Michael Baumgartner (R-Wash.), it aims to create a national framework for U.S. AI leadership by establishing federal guidelines, codifying Trump’s executive order, and implementing a five-year moratorium on state-level AI regulations that could hinder interstate commerce.
The AI Accountability Act (HR 1694). Introduced by Rep. Josh Harder (D-Calif.) in February, this bill would require the National Telecommunications and Information Administration to study and report on accountability measures for AI systems used by communications technologies (such as telecommunications networks and social media platforms) and the ways in which AI accountability measures reduce risks related to AI systems (e.g., cybersecurity risks), among other topics. The NTIA would also have to solicit stakeholder feedback and report to Congress on what information should be available to individuals, communities, and businesses that interact with, are affected by, or study AI systems, as well as methods for making that information available.
The Creating Legal and Ethical AI Recording (CLEAR) Voice Act (HR 334). Introduced by Rep. Rick Allen (R-Ga.), this bill aims to amend the Communications Act of 1934 to explicitly regulate robocalls and prerecorded messages created using generative AI, such as voice cloning. It would require such messages to clearly identify the individual or entity initiating the call and to state that party’s telephone number or address, and it would require any system making such calls to release a recipient’s telephone line within five seconds of notification that the recipient has ended the call.
The AI Risk Evaluation Act (S. 2938). Introduced by Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) in late 2025, this bill would establish a federal program within the Department of Energy to test and evaluate advanced AI systems before deployment. It would require developers to disclose information such as training data and model architecture and would create frameworks for future regulation aimed at preventing catastrophic risks, protecting national security, and ensuring safe AI development through data-driven oversight.
Privacy’s Still an Issue
Nonetheless, for all the legislative and regulatory attention being given to AI, data privacy concerns are not going away; in many cases, privacy becomes an even more pressing issue as AI dominates CRM conversations.
Data privacy is still a major federal question mark, says David Berg, CEO of CommanderAI, a provider of AI sales intelligence for the waste management industry. “There’s continued discussion around a national privacy framework, but in the absence of a clear standard, CRM providers and their customers are left navigating uncertainty. How contact data is collected, enriched, stored, and reused is central to CRM functionality, and inconsistent expectations create risk for both vendors and operators.”
A central piece of legislation in that area is the Telephone Consumer Protection Act (TCPA), a federal law that has been on the books since 1991. The TCPA restricts telemarketing, robocalls, prerecorded messages, and faxes, protecting consumer privacy by establishing the national Do Not Call registry and requiring consent for calls to mobile phones.
Enforcement of the TCPA and Do Not Call requirements has been weak at best, but the administration has pledged to make enforcement more stringent.
“The way regulators and courts are interpreting consent, dialing behavior, and automation now applies directly to modern CRM workflows, including AI-assisted dialing, automated follow-ups, and multichannel outreach,” Berg explains. “Many platforms were built for speed and scale before this level of scrutiny existed, and the gap between how teams operate and what’s considered compliant is getting smaller.”
AI, privacy, and other issues also intersect in another piece of federal legislation, first introduced in both the U.S. House of Representatives and the Senate over the summer. Called the Keep Call Centers in America Act, it would promote transparency about when companies are using AI and when customer queries are being handled by offshore contact centers.
The proposed legislation would apply to companies with at least 50 contact center employees and require them to do the following (a rough compliance sketch follows the list):
- Report to the Department of Labor if 30 percent or more of an individual contact center’s volume is handled outside of the U.S.
- Disclose their location if asked and transfer the customer to a U.S.-based agent if requested.
- Disclose whether AI is being used for customer service communication and transfer customers to a U.S.-based human agent if requested.
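Expressed as contact center routing logic, those requirements might look something like the sketch below. This is a hypothetical illustration only; the function names, threshold constant, and return values are assumptions drawn from the bill summary above, not language from the bill itself.

```python
# Hypothetical sketch of the bill's disclosure-and-transfer rules as
# contact center routing logic. Only the 30 percent threshold and the
# transfer-on-request behavior come from the bill summary; all names
# and structure here are illustrative assumptions.

OFFSHORE_REPORT_THRESHOLD = 0.30  # share of a center's volume handled offshore

def must_report_to_labor_dept(offshore_calls: int, total_calls: int) -> bool:
    # Reporting is triggered when 30 percent or more of an individual
    # contact center's volume is handled outside the U.S.
    return total_calls > 0 and offshore_calls / total_calls >= OFFSHORE_REPORT_THRESHOLD

def route_request(agent_is_ai: bool, agent_in_us: bool,
                  customer_requests_us_human: bool) -> str:
    # AI use and location must be disclosed on request, and the customer
    # can ask to be transferred to a U.S.-based human agent.
    if customer_requests_us_human and (agent_is_ai or not agent_in_us):
        return "transfer to U.S.-based human agent"
    return "continue with current agent"
```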
“If you’re calling customer service, chances are your day isn’t going great. On those frustrating days, you should be able to talk to a real human being right here in the U.S.,” said one of the bill’s authors, Sen. Ruben Gallego (D-Ariz.), in a statement. “My bill will encourage companies to keep their call centers in the U.S. and require that they tell you up front if you’re talking to an AI bot, protecting American jobs and making your day easier.”
“All Americans deserve good service. When folks pick up the phone and ask for help, they shouldn’t have to deal with AI robots or be routed to someone across the world. This bill puts American workers first and ensures people can talk to a real person who understands them when they need help,” added co-sponsor Sen. Jim Justice (R-W.Va.) in a statement.
The proposed legislation is already gaining support from organizations like the Communications Workers of America.
“CWA strongly supports the Keep Call Centers in America Act. This much-needed legislation protects U.S. call center jobs and addresses the growing threats posed by artificial intelligence and offshoring,” said CWA director of government affairs Dan Mauer in a statement at the bill’s introduction. “Historically, companies have offshored customer service jobs to avoid paying good union wages and benefits. Now companies are using AI to deskill and speed up work and displace jobs, which undermines worker rights and degrades service quality for consumers. Our taxpayer dollars should not be used to reward this race to the bottom. We applaud the introduction of the Keep Call Centers in America Act and urge Congress to pass it without delay.”
Mario Matulich, president of Customer Management Practice (CMP), notes that the Keep Call Centers in America Act “highlights what consumers have been demanding for years: better, faster, and more accessible service.”
But, he cautions, “the solution isn’t to avoid innovation; it’s to mold it into an effective system, responsibly. AI is already here.… We need to be prepared to implement it thoughtfully and enable companies to deliver faster resolutions, reduce customer effort, and give human agents the time to focus on higher-value and emotional interactions. The CX winners are already seeing success by combining AI with the emotional intelligence of customer contact professionals.
“Protecting jobs and improving CX are not mutually exclusive,” Matulich continues. “When done right, AI is not a threat to the customer experience. It is a vehicle for empowering agents to transition from reading scripts to making human connections. As a community, our focus should be on investing in innovative AI technology that will make self-service more valuable for those who want it, while enabling agents to deliver better, more personalized support for those who need it.
“If the impact of this bill is to demonstrate the enduring importance of human employees in the support process, then it is well-intentioned. But if it discourages technological innovation and prevents us from preparing agents for next-generation work, it will ultimately come at the expense of the customer experience and job security,” he concludes.
Baker Johnson, chief business officer of UJET, shares that sentiment. “The Keep Call Centers in America Act surfaces valid concerns about service quality and workforce stability, but it misdiagnoses the fundamental problem. The issue isn’t where an agent is located or if they’re an AI; it’s the outdated, siloed systems that result in transfers, long wait times, and overall poor customer service.”
While much of the legislation’s intent is to keep companies from offshoring contact center operations and to bring back some of the jobs already offshored, Gartner predicts that rather than returning those jobs to American shores, companies will use AI to automate the positions.
“Moving agents back to the U.S. will not solve frustrations with IVRs,” Gartner adds. “Poor customer experience with an IVR is due to poor solution design, not whether the IVR is located in the U.S. or offshore.”
And the move to AI won’t satisfy the many consumers who want to be able to interact with a human in their native language, the research firm adds.
Other Legislative Priorities
Also on the federal level are proposed rule changes governing predictive dialing. The Federal Communications Commission’s “Improving Verification and Presentation of Caller Identification Information” proposal would eliminate long-standing limits on call abandonment and minimum ring times, safeguards that for two decades have protected consumers from the unwanted side effects of predictive dialing.
If these FCC changes go through, executives at Sytel, a contact center-as-a-service systems vendor, predict a sharp rise in outbound call volume across the United States.
“The FCC appears to believe that predictive dialers have matured to the point that regulation is no longer needed. That assumption doesn’t reflect reality. Many dialers still chase performance through aggressive, poorly designed pacing, which inevitably creates lots of nuisance calls,” says Sytel CEO Michael McKinlay.
“Yes, we can expect an initial flood of nuisance calls,” McKinlay says. “But consumers today are far more savvy; they simply won’t pick up unknown numbers. And carriers are far less tolerant of short-duration calls. Together, these forces will quickly put the brakes on irresponsible dialers.”
McKinlay warns that legitimate businesses will struggle to reach customers unless they adapt quickly: “In this new environment, success with voice outreach depends on strong branding and disciplined, well-engineered pacing. Calls must carry a trusted corporate identity, and dialers must deliver high performance without generating nuisance calls.”
McKinlay concludes: “We hope the FCC reconsiders relaxing all dialer controls. If it doesn’t do so, then outbound markets can be expected to be chaotic in the short term. But with consumers firmly in the driving seat, well-designed dialers will prevail and the future for outbound voice will be brighter, more transparent, and more trusted.”
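McKinlay’s point about disciplined pacing can be made concrete. The sketch below is a hypothetical illustration, not Sytel’s algorithm: a predictive dialer that throttles its dial-ahead ratio whenever its rolling abandonment rate approaches the 3 percent ceiling long imposed by the FCC safeguards discussed above. The cap and the 15-second minimum ring time reflect those existing rules; everything else (class, names, ratios) is assumed for illustration.

```python
# Hypothetical sketch of abandonment-capped predictive pacing. The 3%
# abandonment ceiling and 15-second minimum ring time mirror the
# long-standing FCC safeguards; the rest is illustrative, not any
# vendor's actual pacing algorithm.

MAX_ABANDON_RATE = 0.03   # share of answered calls dropped with no agent free
MIN_RING_SECONDS = 15     # let the phone ring this long before giving up

class PacedDialer:
    def __init__(self) -> None:
        self.answered = 0      # calls answered by a live person
        self.abandoned = 0     # answered calls dropped for lack of an agent
        self.dial_ratio = 1.2  # lines dialed per agent expected to free up

    def abandon_rate(self) -> float:
        return self.abandoned / self.answered if self.answered else 0.0

    def record_call(self, answered: bool, agent_available: bool) -> None:
        if answered:
            self.answered += 1
            if not agent_available:
                self.abandoned += 1  # a nuisance call: hang-up or dead air

    def may_hang_up(self, ring_seconds: float) -> bool:
        # The minimum ring-time safeguard: don't abandon an unanswered
        # call before it has rung long enough to be picked up.
        return ring_seconds >= MIN_RING_SECONDS

    def lines_to_dial(self, agents_free: int) -> int:
        # Throttle pacing as the rolling abandonment rate nears the cap.
        # Aggressive dialers skip this step, which is what creates the
        # nuisance calls McKinlay describes.
        if self.abandon_rate() >= MAX_ABANDON_RATE:
            self.dial_ratio = 1.0  # fall back to conservative 1:1 dialing
        return int(agents_free * self.dial_ratio)
```

Remove the cap, as the FCC proposal would, and nothing in that loop restrains the dial ratio; the only remaining brakes are the consumer and carrier behaviors McKinlay points to.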
Executives at 8x8 add that legislation aimed at specific industries could affect companies in other industries as well. For example, California’s AB578, which went into effect Jan. 1, requires food delivery platforms to issue full cash refunds rather than app credits and to provide access to a human customer service representative when issues can’t be resolved through automated systems.
“It’s one of the clearest U.S. examples yet of regulators stepping in to curb consumer frustration with rigid, automation-first customer service models,” says Chris Angus, 8x8’s vice president of customer experience and communications platform-as-a-service expansion. “In many ways, it mirrors the customer service laws we’ve seen emerge in Europe, but this time with a tangible, large-scale U.S. precedent now in effect.”
“What’s happening in California isn’t really about food delivery,” adds Dhwani Soni, 8x8’s global vice president of product management and design. “It’s about where we’re drawing the line between automation and accountability in customer service. AI can take a lot of friction out of routine work.”
However, the moments that define trust are the messy ones: when something breaks, when the customer is frustrated, when the context matters, Soni adds. “In those moments, people don’t want to argue with a system. They want a human who can listen, make sense of what’s happening, and own the resolution.”
While some form of federal legislation is expected eventually, in the meantime companies will still need to pay particular attention to state laws.
With that in mind, Hanson recommends that companies provide their customers with explicit opt-in or opt-out options for AI-generated messaging and AI-driven personalization.
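As a rough illustration of that recommendation, a CRM contact record might carry explicit, per-use AI consent flags that campaigns check before sending. The field names and filtering rule below are hypothetical assumptions, not any vendor’s schema.

```python
# Hypothetical sketch of per-contact AI consent flags; the field names
# and the opt-in-by-default rule are illustrative assumptions, not any
# CRM platform's actual data model.
from dataclasses import dataclass

@dataclass
class Contact:
    email: str
    consents_to_ai_messaging: bool = False        # AI-generated content
    consents_to_ai_personalization: bool = False  # AI-driven targeting

def eligible_for_ai_campaign(contact: Contact) -> bool:
    # Opt-in by default: no recorded consent means no AI processing,
    # the conservative answer to the consent questions Hanson raises.
    return contact.consents_to_ai_messaging

contacts = [
    Contact("a@example.com", consents_to_ai_messaging=True),
    Contact("b@example.com"),  # never opted in; excluded below
]
ai_audience = [c for c in contacts if eligible_for_ai_campaign(c)]
```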
“Privacy policies need complete overhauls,” Hanson adds. “The marketers who put themselves in the driver’s seat on this now will avoid the nightmare scenario of retroactive compliance across dozens of campaigns and customer segments. Those who wait will find themselves scrambling when enforcement begins.”
Phillip Britt is a freelance writer based in the Chicago area. He can be reached at spenterprises1@comcast.net.