The burdens of AI compliance, risk management and governance have grown overnight for CIOs, who must wrestle with legal mandates that shift faster than operational guidance. The federal vs. state tug-of-war over conflicting AI laws is widening the gap between what organizations are expected to deploy and what they can confidently defend. The risk is immediate: what ships today may become tomorrow’s liability.
In this new environment, AI governance has escalated from a routine compliance issue for IT teams into an urgent executive and board-level problem, one whose contours are shifting in real time. Ultimately, it is CIOs who must guide the company’s response. The question is: in what direction?
Wait it out or charge ahead?
When choosing a path forward, it can be helpful to look backward at previous times of uncertainty; constantly changing regulatory landscapes are not new challenges for CIOs. The temptation for many executives is to sit tight until the feds and the state regulators finish battling it out. But that tactic has rarely proven successful.
“We saw this exact pattern in the early days of the GDPR; the organizations that waited for every detail to be settled were years behind those that built adaptable governance frameworks from the start,” said Danie Strachan, senior privacy counsel at VeraSafe, a provider of data protection and privacy compliance services.
Strachan suggested CIOs consider the lessons learned from compliance confusion in earlier regulatory battles, such as those over data privacy, cybersecurity and cross-border data transfers. The companies that succeed are “the ones that build a compliance program designed for uncertainty rather than certainty,” he said.
Financial penalties and repercussions
Whether an organization charges ahead on a best guess or waits and ignores the shifting rules, the primary risk is the same: steep penalties that can accumulate through stacked enforcement actions.
“Unlike the EU, where penalties flow from a single AI framework, in the U.S. the biggest risk isn’t a single AI fine, it’s stacked enforcement. Companies can face state-level penalties, federal enforcement under existing consumer and civil rights laws, and civil litigation, all tied to the same AI system,” said Milos Rusic, co-founder and CEO at deepset, a company that builds tools for custom enterprise AI and natural language processing apps.
Penalty amounts are often sizable, and the financial repercussions can linger for years.
“At the federal level, the FTC can impose significant civil penalties, mandate long-term consent decrees and require costly remediation,” warned Pamela Slea, president of Boltive, a provider of a unified platform for ad security and privacy compliance.
“At the state level, privacy laws such as CPRA, VCDPA, CPA and CTDPA allow attorneys general to seek per-violation fines that can quickly scale into the tens of millions, especially for high-traffic digital properties,” Slea added, referring to the California, Virginia, Colorado and Connecticut privacy statutes.
Beyond the bottom line
But high-stakes risks aren’t pinned exclusively to high-dollar penalties. Reputational risk is certainly a top concern, but there are others that CIOs need to factor into their decision-making.
“In practice, the ‘penalty’ splits into three buckets: enforcement risk, commercial risk, and government-contract risk,” said Ensar Seker, CISO at SOCRadar, a provider of extended threat intelligence and real-time cyber threat protection.
Seker said that on the state side, many AI and privacy-adjacent laws are enforced by state attorneys general and regulators through a battery of actions, including civil penalties, injunctions and stop-use orders, and mandatory remediation. In some cases, there is also private litigation exposure if the law creates a private right of action or plaintiffs can plead deception, discrimination or unfair practices.
“That’s where the real cost shows up: legal fees, forced model/feature rollback and class-action style discovery that becomes a forensic audit of your data and decisioning,” he said.
The federal side can be just as concerning, given the requirements that already exist there and will persist. For example, Seker said that for most CIOs today, the bigger risk is procurement and funding leverage rather than a single AI compliance fine.
“If you’re selling into government or operating in federally funded environments, noncompliance can mean losing eligibility, failing audits, getting terminated for default, suspension/debarment pathways and certification liability if you represented controls you didn’t actually have,” he said.
The long reach of enforcement can extend even further. “Even when the federal posture is preemption, the enforcement reality is still: show your governance, show your controls, prove you can operate safely,” Seker said.
Walking the regulatory tightrope
Against this tangled and risk-laden backdrop, CIOs must find a way forward that keeps the company competitive, yet free of legal entanglements. That’s no easy task.
“There’s no way to comply with federal [AI] mandates [overriding state regs] right now because they don’t exist yet; the executive order just created a vacuum. The real penalty for companies is operational paralysis,” said Aimee Cardwell, former CIO and CISO at UnitedHealth Group and now CIO/CISO in Residence at Transcend, a privacy company that manages consumer data permissions across a company’s tech stack.
Cardwell warned against trying to pick winners between state and federal requirements right now. “Instead, take a ‘lowest common denominator’ approach and build your infrastructure to handle the strictest requirements you might face. That approach gives you portability across jurisdictions,” she said.
In other words, mine the state requirements to identify the strictest terms first and build your compliance baseline from there, then layer on controls for any unique jurisdictional requirements so nothing is overlooked.
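To make that concrete, here is a minimal Python sketch of the “strictest requirement wins” merge. The jurisdiction names, control fields and thresholds are hypothetical placeholders for illustration, not a reading of what any statute actually requires.

```python
from dataclasses import dataclass

# Hypothetical per-jurisdiction AI requirements; values are illustrative,
# not an interpretation of any actual statute.
@dataclass(frozen=True)
class AIRequirements:
    requires_impact_assessment: bool  # must run an algorithmic impact assessment
    requires_human_review: bool       # human review of consequential decisions
    max_retention_days: int           # cap on training/log data retention
    requires_opt_out: bool            # consumer opt-out of automated decisions

JURISDICTIONS = {
    "california": AIRequirements(True,  True,  365, True),
    "colorado":   AIRequirements(True,  False, 730, True),
    "virginia":   AIRequirements(False, True,  540, False),
}

def strictest_baseline(reqs: dict[str, AIRequirements]) -> AIRequirements:
    """Merge jurisdictions by keeping the strictest value for each control:
    any True obligation wins, and the shortest retention cap wins."""
    values = list(reqs.values())
    return AIRequirements(
        requires_impact_assessment=any(r.requires_impact_assessment for r in values),
        requires_human_review=any(r.requires_human_review for r in values),
        max_retention_days=min(r.max_retention_days for r in values),
        requires_opt_out=any(r.requires_opt_out for r in values),
    )

if __name__ == "__main__":
    # Baseline: impact assessment, human review and opt-out all required,
    # with the shortest retention cap (365 days) applied everywhere.
    print(strictest_baseline(JURISDICTIONS))
```

Encoding the baseline this way means that when a new jurisdiction comes into scope, its requirements can be added to the table and the strictest common denominator re-derived automatically, rather than re-auditing every control by hand.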
Preparing for a worst-case scenario need not be viewed as a fruitless, overcautious approach. Changes in regulation don’t eliminate the underlying need for AI governance, said Brett Tarr, head of privacy and AI governance at OneTrust, an AI-ready governance platform provider; it “just shifts how and why we deliver governance controls.”
Government-led controls are not the ceiling for AI regulation, but rather the floor. Customers worldwide, not just in the U.S., expect businesses to take care of their data, and statistics show that customers abandon brands that break that trust. The result is that “even if the regulatory environment shifts to fewer controls, market conditions demand that companies pick up the slack if regulations recede, and responsibility for AI governance simply shifts from compliance requirement to a business imperative,” Tarr added.
Ultimately, CIOs who keep their focus on where most AI risk actually originates will do well.
“AI risk doesn’t live at the tool level, it lives in workflows. Blocking tools or waiting for [regulatory] certainty just pushes AI usage underground, where risk and noncompliance multiply,” said Rajesh Raman, CTO of Lanai, an enterprise AI platform.
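As a rough illustration of Raman’s point, the sketch below instruments AI usage at the workflow level instead of blocking tools outright. Everything here, from the `record_ai_usage` decorator to the workflow names and data categories, is a hypothetical example of what such visibility could look like, not a description of Lanai’s platform.

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-governance")

def record_ai_usage(workflow: str, data_categories: list[str]):
    """Hypothetical decorator: attach governance metadata to the workflow
    that invokes an AI model, rather than blocking the tool itself."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            # Emit a structured audit event before the model is called.
            log.info(json.dumps({
                "event": "ai_invocation",
                "workflow": workflow,
                "data_categories": data_categories,
                "function": fn.__name__,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }))
            return fn(*args, **kwargs)
        return inner
    return wrap

@record_ai_usage(workflow="claims-triage", data_categories=["health", "pii"])
def summarize_claim(claim_text: str) -> str:
    # Placeholder for a real model call; returns a stub summary here.
    return claim_text[:80] + "..."

if __name__ == "__main__":
    print(summarize_claim("Policyholder reports water damage after a pipe burst on Jan. 3."))
```

The design choice is that governance metadata travels with the workflow that invokes the model, so compliance teams can see where AI touches sensitive data even as individual tools come and go.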