# Kubernetes vs Nomad: When Each Actually Makes Sense
Kubernetes is the default answer to a question most teams shouldn't be asking yet. Nomad is the answer when you've decided you don't need the full Kubernetes machinery — service mesh, operators, the layered abstractions on top of pods. Nomad runs containers, raw binaries, and Java jars from the same scheduler with one HCL file. Here's the operational comparison.
## The Pricing Reality (2026)

Headline price-per-CPU comparisons are misleading. The real total cost of ownership lives in egress fees, control-plane charges, and the operational time you spend gluing together what the provider didn't ship. The table below keeps the comparison qualitative; specific 2026 numbers would go stale faster than this article.
| Dimension | Kubernetes | Nomad |
|---|---|---|
| Entry pricing | Low friction via managed offerings, plus per-cluster control-plane fees | Predictable; you run the (single-binary) servers yourself |
| Operational load | Higher | Lower |
| Ecosystem depth | Larger | Focused |
| Time-to-first-deploy | Longer | Shorter |
The pricing comparison is workload-dependent. Run a test workload on each for a week and check the actual bill — that's the only honest answer.
## When Kubernetes Wins
- You have 50+ services and multiple teams. The abstraction load is justified at this scale.
- You need the ecosystem. Operators, service mesh, GitOps tooling — Kubernetes has the largest ecosystem in the orchestration space.
- You have at least one full-time platform engineer. K8s rewards investment; it punishes neglect.
## When Nomad Wins
- You want to schedule containers and raw binaries / Java jars / qemu VMs from one scheduler. Nomad's task drivers handle this natively.
- You're a small team. Single binary, simple HCL, no operator pattern to learn.
- You're already using Consul + Vault. The HashiStack integration is tight.
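The mixed-driver point deserves a concrete sketch. The job below is illustrative only (job name, jar URL, and image are placeholders, not from any real deployment): one Nomad job scheduling a Docker container and a plain Java jar side by side, no image build required for the latter.

```hcl
# Illustrative only: one job, two different task drivers.
job "mixed" {
  group "mixed" {
    # A containerized service via the docker driver.
    task "api" {
      driver = "docker"

      config {
        image = "ghcr.io/me/api:latest"
      }
    }

    # A plain Java jar via the java driver.
    task "worker" {
      driver = "java"

      # Placeholder URL; Nomad downloads artifacts into the task's local/ dir.
      artifact {
        source = "https://example.com/worker.jar"
      }

      config {
        jar_path    = "local/worker.jar"
        jvm_options = ["-Xmx512m"]
      }
    }
  }
}
```

Both tasks land on the same scheduler, the same ACLs, and the same `nomad job run` workflow.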
## The Same Job, Both Ways
```hcl
# Nomad — one HCL file, one binary, one `nomad job run`
job "web" {
  group "web" {
    count = 3

    network {
      port "http" { to = 8080 }
    }

    task "app" {
      driver = "docker"

      config {
        image = "ghcr.io/me/web:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 500
        memory = 512
      }
    }

    service {
      name = "web"
      port = "http"

      check {
        type     = "http"
        path     = "/healthz"
        interval = "10s"
        timeout  = "2s"
      }
    }
  }
}
```

The Kubernetes equivalent is a Deployment + Service + (probably) an Ingress + (probably) an HPA. That's three to four manifests, one of which needs an Ingress controller you also have to install.
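For comparison, here is a minimal sketch of just the two mandatory Kubernetes manifests, with the image, replica count, resources, and health check carried over from the Nomad job above (Ingress and HPA would be additional files on top):

```yaml
# Deployment: the rough equivalent of Nomad's group + task stanzas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: ghcr.io/me/web:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
          # Mirrors the Nomad service check.
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
            timeoutSeconds: 2
---
# Service: the rough equivalent of Nomad's service stanza.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Neither version is wrong; the point is the difference in surface area for the same three replicas of the same container.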
## The Verdict

Under 50 services and a small team: Nomad, every time. The operational tax is meaningfully lower. Above 50 services, with multiple teams, complex multi-tenancy, or a hard dependency on the ecosystem (operators, service mesh, GitOps tooling): Kubernetes. The honest answer for most teams is that they're using Kubernetes prematurely — but switching back is harder than starting on Nomad would have been.
## Frequently Asked
### Is Nomad production-ready?
Yes. Cloudflare runs Nomad. Roblox runs Nomad. The 'is it production-ready' question was answered five years ago.
### Why does everyone use Kubernetes if Nomad is simpler?
Network effects. K8s has the larger ecosystem, the larger talent pool, the larger collection of tools. Nomad is operationally simpler but the gravity of K8s is real, especially in hiring.
### Can I run stateful workloads on Nomad?
Yes, but the patterns are less mature than K8s StatefulSets. CSI plugins exist. Most teams running stateful workloads on Nomad use Consul for service discovery and treat the volume layer carefully.
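As a rough illustration of the shape this takes (the job name, volume ID, and image are placeholders, and the CSI volume must already be registered with the cluster): a group-level volume claim plus a task-level mount.

```hcl
# Illustrative CSI volume usage; "pg-data" is a pre-registered volume ID.
job "db" {
  group "db" {
    volume "data" {
      type            = "csi"
      source          = "pg-data"
      attachment_mode = "file-system"
      access_mode     = "single-node-writer"
    }

    task "postgres" {
      driver = "docker"

      config {
        image = "postgres:16"
      }

      # Mount the claimed volume into the container.
      volume_mount {
        volume      = "data"
        destination = "/var/lib/postgresql/data"
      }
    }
  }
}
```

The `access_mode` here pins the volume to a single writer, which is the conservative choice for a database.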
### What about Helm charts and operators?
Nomad doesn't have a direct equivalent. The ecosystem is much smaller. If you depend on a complex operator (Cassandra, ScyllaDB, Strimzi for Kafka), K8s is the right answer.
Have a correction or a different field experience? We update these pieces. Honest critique welcome.