Most Kubernetes security conversations start and end with configuration. Is the API server exposed? Are pod security policies in place? Did someone leave a NodePort open to the internet? Valid questions. But in our experience working on kernel-level telemetry systems, configuration posture is only half the picture, and in practice it's the less dangerous half. The runtime is where attackers actually operate.
MITRE ATT&CK for Containers gives us a shared language for the runtime half. Released as a dedicated matrix in 2021 and updated since, it maps attacker behaviors specific to containerized workloads, including Kubernetes-native techniques that have no equivalent in the enterprise ATT&CK matrix. We use it as the classification backbone for every Kubesentry alert. Here's how the key tactics map to real Kubernetes attack patterns, and what detection coverage actually looks like per tactic.
Why the Container Matrix Is Different from Enterprise ATT&CK
The enterprise ATT&CK matrix was built around endpoint and network telemetry: process trees on Windows hosts, lateral movement via SMB, credential theft from LSASS. That mental model doesn't map cleanly to containers. A container doesn't have LSASS. Lateral movement via RBAC service account escalation looks nothing like pass-the-hash.
The Container matrix adds techniques like Deploy Container (T1610), Container and Resource Discovery (T1613), and Escape to Host (T1611). These are Kubernetes-native attack primitives. Any detection stack that doesn't have explicit coverage for them is working with a blind spot, whether the team knows it or not.
We've found that most mid-size SaaS teams who've done some ATT&CK mapping have done it for their endpoint layer. Almost none have applied it systematically to their container runtime. That's the gap this matrix is meant to close.
Tactic-by-Tactic: Execution
Execution in containers often means running code somewhere it doesn't belong. The most common technique we see: kubectl exec into a running pod (T1609 — Container Administration Command). It's a legitimate debugging tool. It's also how an attacker with API server access spawns a shell inside a production workload.
Detection signal: an exec event that doesn't appear in the workload's behavioral baseline. Kubesentry collects this via eBPF at the kernel level, specifically the execve syscall chain originating from a container process that was spawned outside the normal container entrypoint. The key is baseline context. An exec that matches your CI deployment pipeline's historical pattern passes quietly. An exec that looks like a shell spawned at 2 AM in a namespace that has never run interactive processes before is a different story entirely.
We alert on these within 90 seconds of detection. High-severity, paged to PagerDuty. Every time.
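To make the baseline logic concrete, here's a minimal Python sketch. The event fields, the per-namespace baseline table, and the severity labels are illustrative assumptions for this post, not Kubesentry's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ExecEvent:
    namespace: str
    binary: str            # argv[0] of the execve
    from_entrypoint: bool  # spawned by the container's entrypoint chain?

# Hypothetical per-namespace baselines learned from historical exec activity
BASELINE = {
    "ci-deploy": {"kubectl", "helm", "sh"},  # CI pipeline execs are normal here
    "payments": set(),                       # never runs interactive processes
}

def classify_exec(event: ExecEvent) -> str:
    """Score a container exec (T1609) against the workload's baseline."""
    known = event.binary in BASELINE.get(event.namespace, set())
    if known and event.from_entrypoint:
        return "baseline"  # matches historical behavior, no alert
    if not known and not event.from_entrypoint:
        return "high"      # unfamiliar binary outside the entrypoint chain
    return "medium"        # partial match: lower-severity alert
```

A 2 AM bash shell in the payments namespace scores "high"; a helm run in the CI namespace scores "baseline".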
Persistence
Persistence in Kubernetes usually means deploying something. Specifically: getting a malicious workload scheduled by the cluster itself. T1053.007 — Container Orchestration Job (in Kubernetes, a CronJob) is one of the cleaner techniques because CronJobs are rarely audited post-deployment. An attacker who has compromised a high-privilege service account can create a CronJob that runs a cryptominer or a reverse shell on a schedule. It survives pod restarts. It survives node replacements. It keeps running until someone looks at the CronJob list.
Our data shows that CronJob-based persistence is consistently underdetected. Cloud provider audit logs will show the resource creation event, but only if someone is monitoring them in real time and has correlated the creating identity to a known-compromised principal. In practice, those alerts don't fire until the post-incident forensics phase, which is too late.
Runtime detection of persistence means watching what newly scheduled containers actually do when they run, not just logging the fact that they were created.
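As a sketch of the creation-time half, the check below flags CronJob specs that pull from untrusted registries or request host-level privileges. The registry allowlist and the simplified dict shape are assumptions for illustration; the runtime behavior of the scheduled pods still needs the eBPF layer:

```python
# Assumed allowlist; a real deployment would source this from policy config
TRUSTED_REGISTRIES = ("registry.internal.example.com/",)

def audit_cronjobs(cronjobs):
    """Flag CronJobs (T1053.007) worth a closer look.

    Each entry is a simplified dict: name, image, and optional
    privileged / hostPID booleans pulled from the pod template.
    """
    findings = []
    for cj in cronjobs:
        if not cj["image"].startswith(TRUSTED_REGISTRIES):
            findings.append((cj["name"], f"untrusted image {cj['image']}"))
        if cj.get("privileged") or cj.get("hostPID"):
            findings.append((cj["name"], "host-level privileges requested"))
    return findings
```

Run against the live CronJob list on a schedule, this catches the persistence artifact itself rather than waiting for the audit-log creation event to be correlated.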
Privilege Escalation: The 4-Minute Window
This is the one that keeps us up at night. In our incident data, escalation from container escape to cluster-admin takes four to eight minutes in an unmonitored Kubernetes environment. That window exists because the techniques aren't subtle; they're just invisible to the tools most teams are running.
T1611 — Escape to Host is the core technique. An attacker running in a container with elevated privileges (the CAP_SYS_ADMIN or CAP_NET_ADMIN capability, or running as root with hostPID: true) can mount the host filesystem, execute processes in host namespaces, and from there access the kubelet credentials stored on the node. Once you have kubelet creds, cluster-admin is often one API call away.
The eBPF detection signal for escape attempts is specific: mount syscalls targeting /proc/1/root or /host, nsenter-style setns syscalls, or ptrace calls targeting processes outside the container's cgroup. Kubesentry classifies these as Privilege Escalation with MITRE technique T1611, and they fire as critical-severity alerts regardless of the workload's baseline, because there is no legitimate business reason for a containerized web service to be calling setns.
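A minimal classifier over those signals might look like the following. The event shape is a simplification; assume each kernel event carries the syscall name, its target, and whether the target sits inside the caller's cgroup:

```python
# Mount targets that indicate reaching for the host filesystem
HOST_MOUNT_TARGETS = ("/proc/1/root", "/host")

def is_escape_attempt(syscall: str, target: str = "",
                      target_in_cgroup: bool = True) -> bool:
    """Flag syscall events consistent with Escape to Host (T1611)."""
    if syscall == "setns":
        return True   # nsenter-style namespace switch
    if syscall == "ptrace" and not target_in_cgroup:
        return True   # tracing a process outside the container's cgroup
    if syscall == "mount" and target.startswith(HOST_MOUNT_TARGETS):
        return True   # mounting the host filesystem into the container
    return False
```

Note there's no baseline lookup here, matching the point above: these events are critical regardless of what the workload normally does.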
Lateral Movement via Service Account Abuse
Here's the thing about lateral movement in Kubernetes: it doesn't look like network scanning. It looks like API calls.
T1550.001 — Use Alternate Authentication Material applied to Kubernetes means using a service account token to make API calls that its owner workload has no business making. An attacker who exfiltrates a token from a compromised pod and then uses it to list Secrets in a different namespace, or to patch a Deployment in a different team's project, leaves a trace in the API server audit log. But that trace only becomes a detection if someone has correlated the token's identity with its historical API access pattern.
Our approach: we cross-reference the Kubernetes API audit log with the pod's declared RBAC RoleBindings and its historical API access fingerprint. When a web-tier pod's service account suddenly makes a list secrets call in the payments namespace when it has only ever read ConfigMaps in its own namespace, that's a Lateral Movement alert. The technique tag ships with the alert, so teams using Datadog or Splunk can build tactic-level correlation rules without a separate enrichment step.
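A stripped-down version of that fingerprint check, with the audit-event and fingerprint shapes as illustrative assumptions:

```python
def is_lateral_movement(event: dict, fingerprint: set) -> bool:
    """Compare one API-server audit event against a service account's
    historical access fingerprint (T1550.001 applied to Kubernetes).

    event: {"verb": ..., "resource": ..., "namespace": ...}
    fingerprint: set of (verb, resource, namespace) tuples seen before.
    """
    key = (event["verb"], event["resource"], event["namespace"])
    return key not in fingerprint

# The web-tier example from above: only ConfigMap reads in its own namespace
web_tier_fingerprint = {
    ("get", "configmaps", "web"),
    ("list", "configmaps", "web"),
}
```

Feeding it a `list secrets` event in the payments namespace returns True, which is exactly the web-tier scenario described above; a production version would also check the event against the pod's declared RoleBindings before alerting.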
In our tracking, service account abuse events are detected an average of 11 days late when teams rely on cloud-provider billing anomalies or quarterly RBAC audits as their primary signal. Real-time behavioral correlation closes that gap to under 2 minutes.
Credential Access
Two main vectors in Kubernetes. First: stealing secrets directly. T1552.007 — Container API (accessing the Kubernetes Secrets API or environment variables injected at pod startup). Second: credential-dumping via access to the host filesystem after an escape.
The first vector is detectable at the eBPF layer by watching for open syscalls against known secret-injection paths (/var/run/secrets/kubernetes.io/serviceaccount/token, /etc/secrets/, environment variable reads that access /proc/self/environ). When a container reads its own service account token as part of normal API operations, that's baseline. When it reads the token and then immediately initiates a new outbound network connection to an external IP outside its normal egress pattern, that's Credential Access followed by data Exfiltration. The combination, not either event alone, is the signal.
Honestly, this is where the tactic classification earns its value in SOC workflows. A single anomalous outbound connection might be a misconfigured DNS resolver. A Credential Access event followed immediately by a new Exfiltration-tagged network event is a kill chain in progress. Kubesentry enriches each alert with the MITRE tactic at detection time, so the analyst seeing the Datadog alert gets that context without needing to reconstruct the sequence manually.
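The sequence logic reduces to a small correlation over tactic-tagged events. This sketch fires only on the combination; the 60-second window and the event shape are assumptions for illustration:

```python
def kill_chain_detected(events, window_s: float = 60.0) -> bool:
    """events: time-ordered (timestamp, tactic) pairs for one pod.

    Fire only when a Credential Access event is followed within
    window_s by an Exfiltration-tagged event; neither alone alerts.
    """
    cred_times = [t for t, tactic in events if tactic == "credential_access"]
    for t, tactic in events:
        if tactic == "exfiltration":
            if any(0 <= t - ct <= window_s for ct in cred_times):
                return True
    return False
```

A lone anomalous outbound connection (the misconfigured DNS resolver case) never trips this; a token read followed twelve seconds later by a new external connection does.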
Building Coverage Maps with ATT&CK
The practical value of the Container matrix isn't just naming things. It's that you can build a coverage map: which tactics do you have detection coverage for, and which are blind spots?
For most mid-size SaaS teams running on EKS, GKE, or AKS, the honest answer is: CSPM tools cover some of the Initial Access and Defense Evasion tactics (misconfigured cluster endpoints, overly permissive pod security). Runtime detection covers Execution, Persistence, Privilege Escalation, Lateral Movement, Credential Access, and Exfiltration, but only if you have eBPF telemetry or equivalent syscall visibility. Most teams don't. That's the coverage gap.
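A coverage map can literally be a small table. Using the tactics named in this post (not the full Container matrix) and two hypothetical detection sources, the sketch below reports which tactics a given stack covers:

```python
# Tactics discussed in this post; the full Container matrix has more
TACTICS = [
    "Initial Access", "Execution", "Persistence", "Privilege Escalation",
    "Defense Evasion", "Credential Access", "Lateral Movement", "Exfiltration",
]

# Illustrative mapping of detection sources to tactic coverage
SOURCE_COVERAGE = {
    "cspm": {"Initial Access", "Defense Evasion"},
    "ebpf_runtime": {
        "Execution", "Persistence", "Privilege Escalation",
        "Credential Access", "Lateral Movement", "Exfiltration",
    },
}

def coverage_map(deployed_sources):
    """Return {tactic: covered?} for the listed detection sources."""
    covered = set()
    for source in deployed_sources:
        covered |= SOURCE_COVERAGE.get(source, set())
    return {tactic: tactic in covered for tactic in TACTICS}
```

Running it with only "cspm" deployed makes the blind spots explicit: every runtime tactic comes back uncovered.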
We've been building Kubesentry specifically for the teams that sit in that gap: 15-150 engineers, 1-3 DevSecOps staff, real production workloads, no budget for a 10-person internal security engineering team. The goal is full Container matrix tactic coverage from a single DaemonSet deployment, with every alert carrying its tactic tag so your Splunk or Datadog dashboards can report coverage against the framework without manual enrichment.
If you're running Kubernetes in production and you don't have visibility into the runtime half of the ATT&CK matrix, that's the gap to close first. Configuration posture matters. Runtime behavior is what attackers actually exploit.