Are MSAs for AI Agents liability laundering?

by Epochal Team

The lawsuits are already here

February 2025: A federal judge sanctions lawyers for submitting fake AI-generated case citations in a lawsuit against Walmart. The AI hallucinated court precedents. They cited them anyway. Now they're paying fines and facing malpractice claims.

Mobley v. Workday: Derek Mobley never got past Workday's AI screening system. He claimed race, age, and disability discrimination. The court let his claims proceed: not just against the employers who used Workday, but against Workday itself.

Workday's defense? "We're just software."

The court's response? "No, you're an agent acting on behalf of employers."

Here's what this means for you:

You signed a contract for an AI research tool. Your consultant uses it for a client deliverable. It hallucinates a case citation. Your client presents it to a regulator.

Who pays the malpractice claim?

Your vendor's MSA says: "Customer is responsible for validating all outputs."

You're holding the bag.

Your current contract probably doesn't protect you

Most AI agent vendors use standard SaaS contracts. Those contracts assume software is predictable—you press a button, you get a result.

AI agents don't work that way.

They plan. They adapt. They interpret your request and take action without asking permission.

The litmus test:

  1. Does it act without someone pressing a button?
  2. Does it remember context across sessions?
  3. Does it trigger external systems on its own?

One "yes" means you're not licensing software. You're contracting labor.

And your SaaS contract wasn't built for that.

There's a new template trying to fix this

GitLaw and Paid.ai just released an AI Agent MSA built on Common Paper. First mainstream attempt to address agentic liability.

What it gets right:

Data rights. Separates input from output data. Lets you opt out of training. Distinguishes "training" from "improvement."

Human oversight. Confirms you keep final decision authority. Requires review checkpoints.

Third-party models. Forces transparency when vendors use OpenAI, Anthropic, etc.

IP clarity. Acknowledges outputs may be similar across customers. Defines ownership.

Regulatory adaptation. Includes renegotiation rights when laws change (EU AI Act, etc.).

This is progress. But it's not enough.

Where the template still leaves you exposed

"Improvement" = training through the back door.

Vendors claim telemetry is just "improvement," not retraining. That's a loophole. Your data could be training their model for your competitors.

"Appropriate oversight" = whatever they say it means.

Your contract says you need "appropriate oversight." Your consultant skips validation once. AI hallucinates. Client sues.

Vendor argues you didn't supervise "appropriately."

What does "appropriate" mean? Hourly checks? Daily? Per-output? Nobody defined it.

Upstream risk lands on you.

Your vendor uses GPT-5. OpenAI deprecates it and pushes GPT-6. GPT-6 is more cautious, slower, less useful. Or just plain sycophantic, as has happened before.

You have no contract with OpenAI. You inherit the downgrade anyway.

Model drift has no guardrails.

AI models change behavior over time. The template doesn't require version pinning, rollback rights, or change notices.

Your Q1 results used Model Version 3.2. Q2 uses 3.5. Outputs change. Your processes break. Vendor shrugs.
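
Version pinning is cheap to ask for and easy to verify. Here's a minimal sketch of the difference, assuming an OpenAI-style API (the model names are illustrative):

  # Pinned vs. floating model identifiers (illustrative names, OpenAI-style API).
  from openai import OpenAI

  client = OpenAI()

  PINNED = "gpt-4o-2024-08-06"   # dated snapshot: behavior frozen until deprecation
  FLOATING = "gpt-4o"            # alias: silently follows whatever ships next

  response = client.chat.completions.create(
      model=PINNED,  # same weights in Q1 and Q2
      messages=[{"role": "user", "content": "Summarize this contract clause."}],
  )
  print(response.choices[0].message.content)

A floating alias is how Q1's 3.2 quietly becomes Q2's 3.5. If your vendor can't tell you which identifier they use, assume it floats.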

Insurance isn't mandatory.

The template doesn't require vendors to carry AI-specific E&O coverage. Mobley v. Workday showed vendors can be sued directly. But if your vendor is uninsured, you're still the one paying when something breaks.

What to actually demand in your contract

Pull out your current AI vendor MSA. Check for these seven things:

1. Data rights

  • Ban cross-customer learning unless you explicitly opt in
  • Define what "improvement" means (operational metrics only, no semantic extraction)
  • Spell out retention and deletion schedules

2. Autonomy scope

  • List what the agent can do without approval
  • Define when it must stop and ask
  • Include kill-switch provisions (a minimal sketch follows this list)
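
What does that boundary look like in practice? Here's a minimal sketch, with hypothetical action names and a hypothetical dispatch function, not any vendor's actual API:

  # Sketch of a contracted autonomy boundary. All names are hypothetical.
  ALLOWED_ACTIONS = {"draft_email", "summarize_document"}       # agent may act alone
  APPROVAL_REQUIRED = {"send_email", "file_regulatory_report"}  # must stop and ask

  KILL_SWITCH_ENGAGED = False  # operator override halts everything

  def execute(action: str, payload: dict) -> str:
      return f"executed {action}"  # stand-in for the real side effect

  def request_human_approval(action: str, payload: dict) -> str:
      return f"queued {action} for human sign-off"

  def dispatch(action: str, payload: dict) -> str:
      if KILL_SWITCH_ENGAGED:
          return "halted: kill switch engaged"
      if action in ALLOWED_ACTIONS:
          return execute(action, payload)
      if action in APPROVAL_REQUIRED:
          return request_human_approval(action, payload)
      return "refused: action outside contracted scope"

  print(dispatch("send_email", {"to": "client@example.com"}))  # queued for sign-off

If the vendor can't show you where this boundary lives in their product, the contract clause has nothing to attach to.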

3. Oversight requirements

  • Don't accept "appropriate oversight"
  • Specify: validation frequency, approval thresholds, audit log requirements
  • Make the vendor provide evidence (logs, audit trails, dashboards); a sample record is sketched below
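
What counts as evidence? A rough sketch of a per-action audit record, using a hypothetical schema (none of these field names are standard):

  # Sketch of a per-action audit record. The schema is illustrative.
  import json
  from datetime import datetime, timezone

  def audit_record(action: str, model_version: str, approver: str | None) -> str:
      return json.dumps({
          "timestamp": datetime.now(timezone.utc).isoformat(),
          "action": action,
          "model_version": model_version,  # ties the output to a pinned model
          "human_approver": approver,      # None means the agent acted alone
          "output_validated": approver is not None,
      })

  print(audit_record("draft_email", "model-3.2", approver="j.smith"))

One record per action, exportable on demand, is enough to settle an "appropriate oversight" dispute before it starts.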

4. Version control

  • Pin model versions for critical workflows
  • Require 30-day notice before changes
  • Keep rollback rights for 90 days

5. IP and indemnity

  • Vendor indemnifies for their training data and model weights
  • You indemnify only for your inputs
  • Carve out similar outputs (not your liability if other customers get similar results)

6. Upstream provisions

  • If foundation models degrade, you get credits or termination rights
  • Vendor must disclose model families and deprecation roadmaps
  • SLA penalties for upstream-caused outages

7. Insurance and remedy caps

  • Require Tech E&O and cyber coverage with AI endorsements
  • Minimum coverage: 12-24 months of contract value
  • If they can't get insurance, cap liability and establish escrow for high-risk use cases

Go check your contract right now. Count how many of these seven you actually see.

When vendors push back (and how to respond)

"Our standard contract covers bugs."

AI agents don't have bugs the way software does. They drift. That's the product working as designed, not a defect.

Demand: Version pinning, drift SLAs, change control.

"This is too restrictive."

You sold us autonomy. Own the consequences inside that scope. We own misuse. You own product failures.

Demand: Clear autonomy boundaries and dual-track liability.

"Insurance is expensive."

Tech E&O with AI endorsements exists. Courts are already letting liability claims against vendors proceed (Mobley). Coverage isn't optional anymore.

Demand: A certificate of insurance, or a liability cap at 12-24 months of fees plus escrow.

"Model deprecations are inevitable."

Then give us notice, long-term support, and rollback options.

Demand: 90-day notice, 6-month LTS windows, version-pinned deployments for critical workflows.

Do this next week

Monday: Pull your current AI vendor contracts. All of them.

Tuesday: Map which systems are actually "agents" (use the 3-question litmus test above).

Wednesday: Check for the seven non-negotiables. Highlight gaps.

Thursday: Send your redlines to Legal and your vendor. Reference Mobley and the Walmart case.

Friday: If your vendor says "this is standard language," show them this article.

Standard language is what gets you sued.

If you're stuck negotiating or need someone to review your contracts before you scale, we can help.


