The Accountability Crisis Nobody's Talking About

While headlines obsess over copyright lawsuits and workforce cuts, a far more insidious problem is unfolding quietly in enterprise boardrooms: AI agents are making autonomous decisions in HR, finance, and supply chain management—and absolutely nobody knows who's liable when they fail.

[1] Gartner has predicted that by mid-2026, new categories of unlawful AI-informed decision-making will generate more than $10 billion in remediation costs across global AI vendors and enterprises that leverage AI.

This week, The Register published a damning investigation revealing that the world's largest enterprise software vendors have stonewalled basic questions about liability: [1] Microsoft and SAP refused to comment, while Workday, Salesforce, ServiceNow, and Oracle did not respond at all.

Their silence speaks volumes.

The Real Problem: Hallucinations in Production

The stakes are concrete and terrifying. [1] LLM hallucinations in performance summaries, incorrect regulatory filings, and critical supplies failing to turn up are among the risks weighing on businesses that hand decision-making to AI.

Imagine an AI agent hallucinating in a performance review—generating false accusations of underperformance—that ripples through compensation decisions and wrongful termination lawsuits. Or an autonomous system filing incorrect tax returns. Or a supply chain agent misallocating inventory, costing millions.

Who gets sued? The vendor who sold the AI? The company that deployed it? Both? Neither, because the AI agent was "operating autonomously"?

The Vendor Shell Game

Malcolm Dowden, senior technology lawyer at Pinsent Masons, laid out the problem precisely: [1] "There's a historic assumption that the vendor will be picking up liability if the thing is going to go wrong. That's the point of origin for more or less all of these discussions."

But the vendors are abandoning that historical assumption. [1] Lydia Clougherty Jones, Gartner VP analyst, said decision-making by AI agents may take AI liability to a new level. "When AI agents... are considered to operate on behalf of an organization, decision-making risk becomes ambiguous and unpredictable. It also signals AI risk redistribution with unknown parameters," she said.

Translation: companies will eat the losses.

The Scale of the Deployment

The problem is becoming massive. [1] The largest enterprise application providers are now talking about using AI agents to automate decisions in HR, finance, and supply chain management. Thousands of companies are already piloting these systems, often without clear contractual terms defining liability.

When contracts are silent on AI agent failures, courts will likely default to existing commercial law doctrines, leaving businesses holding the bag with no realistic path to vendor recovery.

What's Missing: Transparency and Risk Allocation

The investigation concluded that [1] holding vendors liable for AI output will remain a challenge until the law is clearer and cases have gone through the courts. We're headed for a tsunami of litigation that will settle fundamental questions years from now—long after billions are lost.

Meanwhile, Gartner's $10 billion forecast is probably conservative. Early settlements in AI-related employment and discrimination cases could easily exceed that figure.

What Needs to Happen Now

For enterprises: Stop deploying AI agents for consequential decisions without explicit liability indemnification in contracts. Get legal review. Document everything.

For vendors: Provide clarity. Specify what you will and won't cover. Your silence is liability risk masquerading as flexibility.

For regulators: The EU AI Act and emerging state laws don't adequately address autonomous agent liability. Congress needs to act now—not after the first $100 million class action.

The irony is bitter: we obsess over AI existential risk while the immediate, quantifiable risk—flawed autonomous decision-making in actual businesses—remains structurally uninsured and undefined. That's not innovation. That's negligence.


Sources & References

[1] https://www.theregister.com/2026/04/05/ai_agents_liability/