The Build-vs-Buy Revolution You're Missing

by Epochal Team

The economics flipped. Most operators haven't noticed yet.

You're about to auto-renew that $75K/year SaaS contract. Same tool. Same features you don't use. Same 12% price increase.

A year ago, that was the obvious move. Building custom tools was expensive. Maintenance was a nightmare. Your team didn't have capacity.

None of that is true anymore.

Three things changed this month

OpenAI shipped "skills" - a system where AI agents learn capabilities from simple Markdown files. No complex integrations. No vendor lock-in. Just folders with instructions that any LLM can read.

Anthropic implemented the same thing independently. We now have a de facto cross-platform standard emerging organically.

Armin Ronacher abandoned MCP entirely. The Flask creator and longtime Sentry engineer, whose technical judgment carries weight, wrote that he's moving to skills, even killing the Sentry MCP integration he'd built.

Why? Skills don't bloat conversation context. They don't invalidate cache. They just work.

Teams started actually building replacements. Martin Alderson documented it: vendors send their annual double-digit price increase, and teams are responding with "could we build this ourselves?"

A year ago that question ended with "no." Now it ends with a working prototype.

Why this matters for your 2026 budget

The "integration tax" for AI capabilities just dropped to near zero.

MCP approach:

  • Declare tools ahead of time in system message
  • Tool definitions must stay static for entire conversation
  • Changing them mid-conversation invalidates cached reasoning traces, so every subsequent turn gets more expensive
  • Vendor-specific protocol

Skills approach:

  • Short descriptions with links to documentation
  • Agent reads them, uses existing tools (bash, file operations, etc.)
  • No special tokens, no cache invalidation
  • Works across Claude, ChatGPT, Codex CLI

A skill is a Markdown file. You can write one in twenty minutes. No enterprise contracts. No API credentials. No vendor negotiations.
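To make that concrete, here is a minimal sketch of what such a file can look like, following the SKILL.md convention Anthropic documents: YAML frontmatter with a name and a description the agent uses to decide when to load the skill, followed by plain instructions. The skill name, file path, and steps below are invented for illustration:

```markdown
---
name: quarterly-renewal-audit
description: Summarizes which features of a SaaS tool the team actually uses. Invoke when auditing a renewal.
---

# Quarterly renewal audit

1. Read the export at `data/usage.csv` (columns: feature, last_used, seat_count).
2. List every feature with no usage in the last 90 days.
3. Estimate what fraction of the contract price maps to unused features.
```

No SDK, no schema registration: the agent reads the folder, matches the description to the task, and follows the steps with the tools it already has.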

The SaaS reckoning nobody's talking about

Here's the shift: Most SaaS complexity exists to serve multiple customers with conflicting needs.

When you're the only customer, that complexity evaporates.

  • No more hoping the vendor prioritizes your feature requests
  • No more workarounds for their architectural decisions
  • No more paying for 200 features when you use 20

When Claude Code can build you a custom dashboard in an afternoon—one that does exactly what you need and nothing you don't—the value proposition of $50K/year for a bloated tool changes.

The maintenance objection is weakening fast:

Agents lower maintenance costs. Updating dependencies, patching security issues, fixing edge cases—these are exactly where AI coding assistants excel.

When your tool lives behind your existing VPN, you've reduced your attack surface compared to giving third-party vendors access to your data.

What's happening at the infrastructure layer

Thinking Machines launched Tinker with fine-tuning support for Kimi K2 Thinking, a trillion-parameter reasoning model. The API is OpenAI-compatible, so it plugs into existing workflows without rewriting your stack.

Gemini Robotics published work using Veo as a world model for policy evaluation: video generation used to simulate robot interactions for testing. It suggests generative models are becoming reliable enough for safety-critical evaluation.

Immersa shipped as open source. Tools that used to require expensive licenses are now MIT-licensed projects that agents can customize for your needs.

The oversight problem that's getting worse, not better

You're a Director of Client Operations. A Harvey AI pilot is running. The partners approved expansion. Legal signed the MSA.

But the contract says "appropriate oversight"—and nobody has defined what that means when your consultants are using AI to research case law for Fortune 500 clients.

The liability frameworks haven't caught up:

  • Your team can build internal tools faster than ever (skills revolution)
  • You're questioning every renewal (SaaS reckoning)
  • AI is getting more capable every quarter (infrastructure improvements)
  • But governance frameworks are still from 2023

The Rachel scenario—getting fired for signing off on something you didn't fully understand—hasn't gone away. It's gotten more likely, because the surface area of AI deployment is expanding faster than the oversight protocols.

What you need to do

The firms that thrive won't be the ones moving fastest. They'll be the ones building oversight that survives contact with reality.

Understand capabilities at a technical level:

Not marketing level. Technical level. What can your AI tools actually do? What are their failure modes? Where do they hallucinate?

Document validation processes:

In ways that will hold up if something goes wrong. Who reviews what? When? What triggers escalation? If you can't write it down clearly, you don't have a process.

Build internal capabilities:

So you're not entirely dependent on vendors whose incentives don't align with your risk profile. The skills revolution makes this cheaper than it's ever been.

Three things to do this month

1. Audit your SaaS renewals through a build-vs-buy lens

Before auto-renewing that $75K contract, ask: "What would it take to build the 20% of this tool we actually use?"

Get a real estimate from someone who's used Claude Code or Cursor on a similar project.

Factor in maintenance—but factor in agent-assisted maintenance, not 2023-era maintenance. The math has changed.

2. Start a skills library

Create a folder. Write Markdown files documenting how your team uses your most important tools.

Not for the AI—for yourselves.

Then test whether Claude or ChatGPT can follow those instructions.

You're building institutional knowledge that works across platforms and survives vendor changes. This costs almost nothing and compounds over time.
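One cheap way to keep that library healthy is an automated check that every skill file carries the frontmatter agents key on. A minimal sketch in Python, assuming the SKILL.md convention described above (YAML frontmatter with `name` and `description` fields); the folder layout and field names are assumptions, not a standard this code enforces:

```python
from pathlib import Path

# Fields an agent uses to decide when to load a skill (assumed convention).
REQUIRED_FIELDS = {"name", "description"}

def parse_frontmatter(text: str) -> dict:
    """Parse a minimal 'key: value' YAML frontmatter block, if present."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            return fields
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return {}  # frontmatter block was never closed

def lint_skills(root: Path) -> list[str]:
    """Report every SKILL.md under root that is missing required fields."""
    problems = []
    for skill_file in sorted(root.rglob("SKILL.md")):
        fields = parse_frontmatter(skill_file.read_text(encoding="utf-8"))
        missing = REQUIRED_FIELDS - fields.keys()
        if missing:
            problems.append(f"{skill_file}: missing {', '.join(sorted(missing))}")
    return problems
```

Run it in CI or a pre-commit hook and the skills folder stays usable by both humans and agents as it grows.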

3. Define "appropriate oversight" before someone else defines it for you

If your AI vendor contract includes vague language about oversight or validation, write down what you're actually doing.

Be specific:

  • Who reviews AI outputs?
  • What percentage get checked?
  • What triggers escalation?

If you can't write it down clearly, you don't have a process—you have a hope.

And hope is not a defense in a malpractice suit.
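Those three questions can be written down as executable policy, which is one way to make "appropriate oversight" concrete and auditable. A minimal sketch in Python; the role name, sample rate, and escalation terms are invented placeholders, not a recommended policy:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class ReviewPolicy:
    """A written oversight policy: who reviews, how much, what escalates."""
    reviewer_role: str        # who reviews sampled outputs
    sample_rate: float        # fraction of outputs that get human review
    escalation_terms: tuple   # phrases that force review regardless of sampling

    def needs_review(self, output_id: str, text: str) -> bool:
        # Escalation triggers always win over sampling.
        if any(term in text.lower() for term in self.escalation_terms):
            return True
        # Deterministic sampling: hash the output ID so the same output
        # always gets the same decision, which is reproducible in an audit.
        digest = hashlib.sha256(output_id.encode()).digest()
        return digest[0] / 256 < self.sample_rate

# Placeholder values for illustration only.
policy = ReviewPolicy(
    reviewer_role="supervising attorney",
    sample_rate=0.20,
    escalation_terms=("privileged", "settlement", "statute of limitations"),
)
```

The point isn't this particular mechanism; it's that a policy precise enough to run is also precise enough to defend.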

The choice you're making whether you realize it or not

The quiet revolution isn't about any single tool or announcement.

It's about the cumulative effect of many small changes that, together, shift what's possible and what's expected.

The firms that recognize this early will have options.

The ones that don't will have explanations.

Choose options.
