
Kubernetes Is Overkill for 90% of Teams (Here's What to Use)

By DevOps Ninja Editorial · Published 2026-05-09

Kubernetes solves problems most teams don't have, at scales most teams won't reach. Here's the lighter-weight stack that ships with a fraction of the operational overhead.

Kubernetes solves problems most teams don't have at scales most teams will never reach. The fact that this is a contrarian opinion in 2026 says more about hiring incentives than engineering reality.

The Symptoms of K8s-Too-Early

If three or more of these symptoms apply to your team, you adopted Kubernetes prematurely. That's fine: the migration off is harder than the migration on, but it's possible.

What to Use Instead

Tier 1: 1-3 Services, Solo or Tiny Team

Docker Compose on a single VPS, behind Cloudflare. A $20/mo Hetzner CPX21 will handle real production traffic. docker compose up -d, set up watchtower for auto-updates, point Cloudflare at the box. Done. We've shipped startups to $1M ARR on this stack.
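A minimal sketch of that Tier 1 setup (the app image and port are placeholders; Watchtower watches the Docker socket and restarts containers when their images are updated in the registry):

```yaml
# compose.yaml -- single-VPS production sketch; app image and port are placeholders
services:
  app:
    image: ghcr.io/example/app:latest   # hypothetical app image
    restart: unless-stopped
    ports:
      - "8080:8080"                     # Cloudflare proxies to this port

  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Watchtower pull and restart updated containers
```

Bring it up with docker compose up -d and Watchtower keeps the app on the latest pushed image from then on.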

Tier 2: 4-15 Services, Small Team

Nomad on 2-3 nodes, with Consul for service discovery. Or — and we mean this — Kamal 2. Kamal turns 'deploy this container to these servers' into a one-liner. No control plane, no orchestrator, just Docker on bare metal with sane defaults.
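A sketch of what that one-liner sits on top of, as a Kamal 2 config/deploy.yml (hostnames, image name, and registry details here are all placeholders):

```yaml
# config/deploy.yml -- Kamal 2 sketch; service name, IPs, and registry are placeholders
service: app
image: ghcr.io/example/app

servers:
  web:
    - 203.0.113.10
    - 203.0.113.11

proxy:
  ssl: true
  host: app.example.com   # kamal-proxy terminates TLS for this hostname

registry:
  server: ghcr.io
  username: example
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment, never committed
```

With that in place, kamal deploy builds the image, pushes it, and rolls it across both hosts with a zero-downtime cutover through kamal-proxy.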

Tier 3: 15-50 Services, Real Team

ECS Fargate if you're on AWS. DOKS if you're on DigitalOcean. With a managed offering, the provider runs the control plane and you only operate workloads; this is the tier where a managed orchestrator starts to pay back the abstraction tax.
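For the AWS path, a hedged CloudFormation sketch of a Fargate service (the cluster name, role ARN, subnet, security group, and image are all placeholders you'd supply):

```yaml
# fargate-service.yaml -- CloudFormation sketch; every ID and ARN below is a placeholder
Resources:
  TaskDef:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: app
      Cpu: "256"                         # 0.25 vCPU, the smallest Fargate size
      Memory: "512"
      NetworkMode: awsvpc
      RequiresCompatibilities: [FARGATE]
      ExecutionRoleArn: arn:aws:iam::123456789012:role/ecsTaskExecutionRole
      ContainerDefinitions:
        - Name: app
          Image: ghcr.io/example/app:latest
          PortMappings:
            - ContainerPort: 8080

  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: app-cluster
      LaunchType: FARGATE
      DesiredCount: 2
      TaskDefinition: !Ref TaskDef
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets: [subnet-aaaa1111]
          SecurityGroups: [sg-bbbb2222]
          AssignPublicIp: ENABLED
```

Two stanzas, no control plane to patch; scaling is a DesiredCount change.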

Tier 4: 50+ Services, Multi-Team

Kubernetes, with the full suite — operators, GitOps, mesh. The ceiling is high; the floor is low. You need a dedicated platform engineer or two.

The Hiring Argument

'But everyone wants to put K8s on their resume' is real. We hear it constantly. Two responses:

  1. Engineers who refuse to work without Kubernetes regardless of fit are usually engineers you don't want on a small team. They optimize for resume, not outcomes.
  2. Senior engineers we know value variety — running a Nomad cluster, a self-hosted Postgres, and a Cloudflare-Workers edge layer is more interesting than touching K8s YAML for two years.

The Migration Path

Going from K8s back to a simpler stack is uncomfortable but not hard. We've helped two teams do it. Both saved $5-15k/mo in cloud spend and recovered 30%+ of their senior engineering time. The pattern:

  1. Inventory your services. Most teams have 5-10 'real' services and 20+ 'we shouldn't have made this a service' services. Consolidate.
  2. Move stateful workloads off the cluster first. Postgres, Redis, anything with a PersistentVolumeClaim (PVC). Run them on dedicated boxes.
  3. Move stateless services to ECS / Fly.io / Kamal-on-VPS. One service at a time. Behind a load balancer; flip traffic in stages.
  4. Decommission the cluster.
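Step 3's staged flip can be expressed directly in most load balancers. A sketch using Traefik's file-provider weighted round robin (Traefik itself is an assumption here, as are the backend URLs; any LB with weighted backends works the same way):

```yaml
# dynamic.yaml -- Traefik weighted round-robin sketch for a staged cutover
http:
  services:
    app:
      weighted:
        services:
          - name: app-k8s      # old cluster backend
            weight: 90
          - name: app-new      # ECS / Fly.io / Kamal backend
            weight: 10         # ratchet this up as confidence grows
    app-k8s:
      loadBalancer:
        servers:
          - url: "http://k8s-ingress.internal:80"
    app-new:
      loadBalancer:
        servers:
          - url: "http://new-stack.internal:80"
```

Shift the weights in stages (90/10, 50/50, 0/100), watch error rates at each step, and only then decommission.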

Where We're Wrong

Some teams genuinely benefit from Kubernetes from day one. Multi-tenant SaaS that needs hard pod isolation and namespace-per-customer. Heavily regulated industries that need the Pod Security Admission story. ML platforms that need the device plugin ecosystem. If you're in those categories, ignore everything above.

For everyone else: ship more, abstract less, and re-evaluate when you actually have 50 services.

This is part of the DevOps Ninja cornerstone series. Honest critique welcome.