A job applicant named Derek Mobley applied to over 100 positions through employers using Workday's AI-powered screening tools. He was rejected every time. In July 2024, a federal court ruled that Workday could face direct liability as an "agent" of those employers, not just a neutral software vendor. The court's reasoning was blunt: the AI wasn't "simply implementing in a rote way the criteria that employers set forth" but "participating in the decision-making process." That single ruling cracked open a question the entire industry has been dodging. When an AI agent makes a decision that harms someone, who actually pays?

The Liability Vacuum

Right now, nobody knows. And the gap between AI agent deployment and accountability frameworks is widening fast. Microsoft reported in February 2026 that 80% of Fortune 500 companies are running active AI agents. These aren't chatbots. They're systems that plan, execute, and interact with other agents across claims processing, hiring, underwriting, and compliance workflows.

The legal infrastructure hasn't caught up. Noam Kolt's 2025 paper "Governing AI Agents" frames the problem through agency law: traditional governance tools like incentive design, monitoring, and enforcement break down when agents make "uninterpretable decisions" at speeds and scales no prior governance system was designed for. The information asymmetry between the company deploying an agent and what that agent actually does is massive, and existing law was built for human actors who can be questioned, fired, or jailed.

The EU tried to address this. The AI Liability Directive was supposed to create a framework for attributing harm caused by AI systems. In February 2025, the European Commission withdrew the proposal. It has signaled that a revised version may follow, but the timing tells you everything: the hardest regulatory questions keep getting deferred while deployment accelerates.

The Principal-Agent Problem, Translated

Economists have studied delegation risk for decades. When you hire a contractor, you accept that their interests might not perfectly align with yours. You manage that through contracts, oversight, and the threat of consequences. Gabison and Xian's 2025 paper on LLM agentic liability identifies why this breaks down for AI agents: an LLM agent "cannot satisfy all criteria of a normal agent in principal-agent theory." It can't be held to a contract. It has no skin in the game. The misalignment between what you told it to do and what it actually does creates what they call an excess of unpredictable actions, with no clear legal subject to absorb responsibility.

Mukherjee and Chang, writing in a 2025 analysis, apply a useful term to what happens next: the "moral crumple zone," where accountability gets diffused across developers, deployers, and end users until nobody owns the failure. The developer says the deployer misconfigured it. The deployer says the developer's model hallucinated. The user says they trusted the system's recommendation. Everyone points elsewhere. The harmed party has nowhere to go.

This isn't theoretical. In multi-agent systems where agents coordinate autonomously, the attribution problem compounds. If Agent A passes bad data to Agent B, which triggers Agent C to execute a harmful action, tracing liability through that chain requires interpretability infrastructure that most production systems simply don't have.
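
To make the attribution problem concrete, here is a minimal sketch of the kind of provenance record that tracing requires. The AgentStep and ProvenanceTrace names and the claims-processing example are hypothetical illustrations, not drawn from any framework cited above: each agent appends what it received, from whom, and what it emitted, so a harmful downstream action can be walked back to its upstream inputs.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class AgentStep:
    """One link in a multi-agent chain: who acted, on what input, producing what output."""
    agent_id: str
    received_from: str | None
    input_summary: dict[str, Any]
    output_summary: dict[str, Any]
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ProvenanceTrace:
    """Append-only record of an agent chain, so a harmful outcome can be traced backward."""
    def __init__(self) -> None:
        self.steps: list[AgentStep] = []

    def record(self, step: AgentStep) -> None:
        self.steps.append(step)

    def trace_back(self, agent_id: str) -> list[AgentStep]:
        """Return every recorded step that fed, directly or transitively, into agent_id's action."""
        upstream: list[AgentStep] = []
        implicated = {agent_id}
        for step in reversed(self.steps):
            if step.agent_id in implicated:
                upstream.append(step)
                if step.received_from:
                    implicated.add(step.received_from)
        return list(reversed(upstream))

# Hypothetical example: Agent A hands bad data to B, B triggers C; tracing C surfaces the whole chain.
trace = ProvenanceTrace()
trace.record(AgentStep("agent_a", None, {"source": "claims_feed"}, {"risk_score": 0.91}))
trace.record(AgentStep("agent_b", "agent_a", {"risk_score": 0.91}, {"recommendation": "deny"}))
trace.record(AgentStep("agent_c", "agent_b", {"recommendation": "deny"}, {"action": "claim_denied"}))
print([s.agent_id for s in trace.trace_back("agent_c")])  # ['agent_a', 'agent_b', 'agent_c']
```

Without a record like this written at the moment each handoff happens, reconstructing the chain after the harm means reverse-engineering opaque model behavior, which is exactly the interpretability gap the paragraph above describes.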

What Organizations Should Do Before Courts Decide for Them

The NIST AI Risk Management Framework offers voluntary governance guidance, and its GOVERN function specifically addresses accountability mechanisms and inter-agent dependencies. But voluntary frameworks don't protect you when a regulator comes knocking. The Mobley v. Workday ruling showed that courts will apply existing discrimination law to AI vendors. They won't wait for bespoke AI legislation.

Chaffer et al.'s ETHOS framework proposes mandatory insurance for AI agents, modeled on how we handle autonomous vehicles. That's directionally right. If your agent can cause harm at scale, the financial accountability should be priced in before deployment, not litigated after the fact.

Organizations deploying agents today should be building what the 2026 AI Safety Report calls for: audit trails with activity logging, clear authority boundaries, and intervention mechanisms that don't require pulling the plug on the entire system. Guardrails aren't optional when your agent can autonomously approve loans, reject applicants, or trigger actions across business functions.
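
As a rough illustration of what authority boundaries and intervention mechanisms can look like in practice, here is a minimal Python sketch. The AuthorityBoundary and guarded_execute names, the thresholds, and the claims example are hypothetical, not taken from the report: actions inside the boundary execute and are logged, everything else is escalated to a human queue, so intervening never means shutting the whole system down.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent_audit")

@dataclass
class AuthorityBoundary:
    """What the agent may do on its own; anything beyond this is escalated, not executed."""
    allowed_actions: set[str]
    max_amount: float

def guarded_execute(action: str, amount: float, boundary: AuthorityBoundary,
                    execute: Callable[[str, float], None],
                    escalate: Callable[[str, float], None]) -> None:
    """Run the action only inside the boundary; otherwise hand it to a human review queue.
    Both paths are logged, so there is an audit trail either way."""
    if action in boundary.allowed_actions and amount <= boundary.max_amount:
        log.info("EXECUTE action=%s amount=%.2f", action, amount)
        execute(action, amount)
    else:
        log.info("ESCALATE action=%s amount=%.2f (outside authority)", action, amount)
        escalate(action, amount)

# Hypothetical example: the agent may approve small claims itself; a large denial goes to a reviewer.
boundary = AuthorityBoundary(allowed_actions={"approve_claim"}, max_amount=5_000.0)
guarded_execute("approve_claim", 1_200.0, boundary,
                execute=lambda a, amt: print(f"done: {a} {amt}"),
                escalate=lambda a, amt: print(f"queued for review: {a} {amt}"))
guarded_execute("deny_claim", 9_800.0, boundary,
                execute=lambda a, amt: print(f"done: {a} {amt}"),
                escalate=lambda a, amt: print(f"queued for review: {a} {amt}"))
```

The design choice that matters is the escalation path: a boundary violation pauses one action for human review instead of forcing an all-or-nothing shutdown of the agent.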

The uncomfortable reality is that courts and regulators will define AI agent liability retroactively, through lawsuits and enforcement actions, not through clean legislative frameworks. Companies that treat governance as a compliance checkbox will discover the hard way that the accountability gap has their name on it.
