Container escapes are fast. Uncomfortably fast. In our work building runtime detection for Kubernetes environments, we've seen privilege escalation from an initial container foothold to cluster-admin reach complete in as little as 4 minutes. That's the window you're working with. CSPM tools won't catch it. Cloud-provider audit logs won't surface it until hours later. This guide covers the actual attack paths that matter on EKS and GKE, and what detection looks like at the kernel level.
Why Container Escapes Are Different From Other K8s Threats
Most Kubernetes security tooling is built around configuration state. It checks whether your cluster has privileged: true pods, whether hostPath mounts expose sensitive directories, whether RBAC bindings are overly permissive. That's useful. But configuration scanning answers a different question than runtime detection.
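For contrast, here's roughly what the configuration question looks like in code: a minimal sketch using the official Kubernetes Python client that flags privileged pods. It's illustrative of the category, not any particular scanner's implementation, and it evaluates only declared spec state.

```python
# Minimal configuration-scan sketch: walks every pod spec and flags
# privileged containers. Static state only; it says nothing about what
# those processes actually do at runtime.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc and sc.privileged:
            print(f"privileged: {pod.metadata.namespace}/{pod.metadata.name}/{container.name}")
```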
A container escape happens inside a running workload. The attacker is already in a valid container, executing within a container image that your scanner probably cleared. The configuration might look fine. The problem is what happens next, at the system-call level, in real time.
Here's the thing: mid-size teams discover 60-75% of runtime security events during post-incident forensics rather than while they're happening. That's not a tooling gap you can close by tuning your CSPM policies harder.
The Three Escape Paths That Show Up Most in Practice
1. Privileged Container Escapes
Privileged containers run with full Linux capabilities and direct access to the host kernel. When an attacker gains code execution inside a privileged pod, they can mount the host filesystem, load kernel modules, or use nsenter to move into the host network namespace. The system-call signature is distinctive: a mount syscall that pulls host filesystems into the container's view, followed by writes to paths like /proc/sysrq-trigger or /etc/cron.d that don't match the container's normal file-write baseline.
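Encoded as a rule, that sequence is easy to express. The sketch below assumes a time-ordered feed of syscall events carrying a path field; the event schema is an illustrative assumption, not a published Kubesentry format.

```python
# Hypothetical check for the privileged-escape signature: a mount syscall
# followed by writes to sensitive host paths outside the workload's
# file-write baseline. Event shape is assumed for illustration.
SENSITIVE_WRITE_PREFIXES = ("/proc/sysrq-trigger", "/etc/cron.d")

def flag_privileged_escape(events, write_baseline):
    """events: time-ordered dicts like {"syscall": ..., "path": ...}."""
    saw_mount = False
    for ev in events:
        if ev["syscall"] == "mount":
            saw_mount = True
        elif saw_mount and ev["syscall"] == "write":
            path = ev.get("path", "")
            if path.startswith(SENSITIVE_WRITE_PREFIXES) and path not in write_baseline:
                yield ev  # alert candidate: mount-then-write outside baseline
```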
On EKS, we've seen this vector used specifically against nodes where the Fargate isolation boundary doesn't apply, targeting EC2-backed node groups where the attacker can reach the underlying instance metadata service after escaping the container namespace.
2. cgroup Namespace Escapes
More subtle. Less obvious in logs. A container configured with a shared cgroup namespace can, under certain kernel versions, use clone with CLONE_NEWCGROUP to probe or manipulate cgroup hierarchies outside its intended scope. The practical impact is typically privilege escalation within the node rather than a full host escape, but that's enough: once an attacker controls cgroup resource allocation, they can starve competing workloads or manipulate memory limits to cause targeted OOM kills.
The detection signal here is a clone syscall with unusual namespace flags from a workload that has no reason to create new namespaces. Simple rule. Rarely fires on legitimate traffic. Worth having.
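As a sketch, that rule is a few lines. The flag constants below come straight from the Linux uapi headers (linux/sched.h); the per-workload boolean is an assumed baseline attribute.

```python
# Clone-with-namespace-flags rule. Constants match linux/sched.h.
CLONE_NEWNS     = 0x00020000
CLONE_NEWCGROUP = 0x02000000
CLONE_NEWPID    = 0x20000000
CLONE_NEWNET    = 0x40000000
NAMESPACE_FLAGS = CLONE_NEWNS | CLONE_NEWCGROUP | CLONE_NEWPID | CLONE_NEWNET

def clone_is_suspicious(clone_flags: int, baseline_creates_namespaces: bool) -> bool:
    # Fire only when the workload has never legitimately created namespaces.
    return bool(clone_flags & NAMESPACE_FLAGS) and not baseline_creates_namespaces
```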
3. hostPath Mount Exploitation
This one isn't technically an "escape" in the classic sense, but in practice it achieves the same result. A container with a hostPath volume mount to /var/lib/kubelet or /etc/kubernetes/pki can read node certificates and service account token files. On GKE, we've tracked incidents where the attacker used a hostPath mount to read a kubelet client certificate, then authenticated directly to the Kubernetes API server as a node identity.
Detection is straightforward at the kernel level: watch for open or read syscalls against path prefixes that don't match the workload's behavioral baseline. If the pod has never previously read from /etc/kubernetes and suddenly does, that's the signal.
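A minimal version of that prefix check, assuming the baseline is the set of path prefixes the workload read during its learning window:

```python
# Path-prefix anomaly check for credential-bearing host paths.
HOST_CRED_PREFIXES = ("/etc/kubernetes", "/var/lib/kubelet")

def read_is_anomalous(path: str, baseline_prefixes: set) -> bool:
    touches_sensitive = path.startswith(HOST_CRED_PREFIXES)
    in_baseline = any(path.startswith(p) for p in baseline_prefixes)
    return touches_sensitive and not in_baseline
```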
What eBPF Sees That Logs Don't
Kubernetes audit logs capture API-plane events. They show you what the Kubernetes control plane processed. They don't show you what happened inside the container's process tree at the kernel level.
eBPF probes sit at the kernel system-call boundary. Every exec, clone, mount, ptrace, connect, and open call from every process in every container passes through the probe. No sampling. No instrumentation required in the application. The probe attaches to the kernel itself.
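To make that concrete, here's a minimal sketch using BCC, one of the open-source eBPF frontends. It illustrates attaching at the syscall boundary; it is not Kubesentry's probe code.

```python
# Attach to the sys_enter_mount tracepoint and log every mount() on the
# node, whichever container it originates from. Requires root and the bcc
# toolkit (https://github.com/iovisor/bcc).
from bcc import BPF

prog = r"""
TRACEPOINT_PROBE(syscalls, sys_enter_mount) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    bpf_trace_printk("mount() from pid %d\n", pid);
    return 0;
}
"""

b = BPF(text=prog)
print("Tracing mount() syscalls... Ctrl-C to stop")
b.trace_print()
```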
In our experience deploying eBPF telemetry on production Kubernetes clusters, the overhead is manageable: under 1.2% CPU and roughly 80MB RAM on a standard c5.xlarge-equivalent node. That's the price of full system-call visibility. Worth it, given the alternative is being blind to the 4-8 minute privilege escalation window.
The Kubesentry DaemonSet deploys one eBPF probe per node using CO-RE (compile once, run everywhere) probes compatible with Linux kernel 5.8+, meaning no kernel recompilation and no kernel module loading. Events stream to the detection engine within 50 milliseconds of the underlying syscall firing.
Building Detection Rules Around Container Escape Signals
Raw syscall telemetry is noisy. The useful signal comes from correlating syscall sequences against per-workload behavioral baselines. Here's what that looks like in practice.
After a 7-14 day baseline window, you know each Deployment's normal pattern: which syscalls it makes, what network destinations it reaches, what paths it reads. Deviation from that baseline generates an anomaly score. Most escape techniques require syscalls or behaviors that are completely absent from a web application's normal operation.
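A toy version of the baseline-plus-scoring model, with an invented weighting scheme purely for illustration:

```python
# Learn each Deployment's syscall set during a baseline window, then score
# anything outside it. Real scoring would weight rarity and sequence context.
from collections import defaultdict

class SyscallBaseline:
    def __init__(self):
        self.seen = defaultdict(set)  # deployment -> observed syscall names
        self.learning = True          # flip to False after the 7-14 day window

    def observe(self, deployment: str, syscall: str) -> float:
        if self.learning:
            self.seen[deployment].add(syscall)
            return 0.0
        return 0.0 if syscall in self.seen[deployment] else 1.0  # anomaly score
```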
The key signals to watch for, mapped to MITRE ATT&CK for Containers (a compact rule-table sketch follows the list):
- Privilege Escalation (TA0004): ptrace calls from a process that has never used ptrace; setuid or setgid calls in containers that don't run privileged processes; unexpected capset with capability bits not in the pod spec
- Defense Evasion (TA0005): unshare with namespace flags; deletion of audit trail files; prctl calls modifying process dumpability
- Persistence (TA0003): writes to /etc/cron.d, /etc/init.d, or systemd unit directories via hostPath mounts
- Lateral Movement (TA0008): new outbound connections to internal pod CIDR ranges from a workload that normally only talks to external endpoints
- Credential Access (TA0006): reads from /var/run/secrets/kubernetes.io/serviceaccount/token followed immediately by calls to the Kubernetes API outside the pod's declared RBAC scope
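Condensed into code, the list above becomes a small rule table. The event schema and predicates here are assumptions for illustration, not a published rule format:

```python
# Hypothetical rule table pairing event predicates with MITRE tactics.
RULES = [
    ("TA0004 Privilege Escalation",
     lambda ev: ev["syscall"] in ("ptrace", "setuid", "setgid", "capset")
                and ev["syscall"] not in ev["baseline_syscalls"]),
    ("TA0005 Defense Evasion",
     lambda ev: ev["syscall"] == "unshare" and ev.get("namespace_flags", 0) != 0),
    ("TA0003 Persistence",
     lambda ev: ev["syscall"] == "write"
                and ev.get("path", "").startswith(("/etc/cron.d", "/etc/init.d"))),
    ("TA0006 Credential Access",
     lambda ev: ev.get("path", "")
                == "/var/run/secrets/kubernetes.io/serviceaccount/token"),
]

def classify(event):
    return [tactic for tactic, predicate in RULES if predicate(event)]
```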
Each alert carries the full kill-chain context: affected pod name, namespace, node, triggering syscall sequence, MITRE tactic classification, and a recommended response. No pivot to a separate SIEM to get context. It's all in the alert.
EKS vs GKE: Platform Differences That Matter for Detection
Both platforms run standard Kubernetes, but their infrastructure layers create different detection considerations.
On EKS, EC2-backed node groups give an attacker a path to the AWS instance metadata service (IMDS) after a host escape. We've tracked incidents where the escape sequence included an outbound HTTP connection to 169.254.169.254, the IMDS endpoint, to harvest IAM role credentials. That's a distinctive network signal: no legitimate containerized application should be connecting to the link-local IMDS address. Flag it every time.
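That rule is about as simple as detection gets. A sketch, assuming connect events carry a destination IP field:

```python
# Flag any connect() to the link-local AWS instance metadata address.
IMDS_ADDR = "169.254.169.254"

def is_imds_access(event) -> bool:
    return event["syscall"] == "connect" and event.get("dst_ip") == IMDS_ADDR
```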
GKE's Workload Identity binding means service account token misuse looks slightly different. An attacker who reads the projected service account token and makes GCP API calls will show up in Cloud Audit Logs, but only at the API plane. The kernel-level read of the token file itself is what eBPF catches, before the attacker has made any API calls at all.
Practically speaking: the syscall-level escape signals are platform-agnostic. The post-escape network and credential behavior is where EKS and GKE diverge.
The 4-8 Minute Window
This is the number we keep coming back to. In an unmonitored Kubernetes environment, container-escape-to-cluster-admin privilege escalation completes in 4-8 minutes. That's not a theoretical attack speed. That's what we've measured in controlled incident reconstructions with real techniques.
Most teams relying on cloud-provider billing anomalies or audit log review to catch runtime threats are looking at an 11-day mean time to detect for cryptomining deployments. Days. Not minutes.
The only way to operate inside that 4-8 minute window is real-time kernel telemetry with behavioral detection that doesn't require an analyst to write and tune rules for every new technique. The baseline-plus-anomaly-scoring model gets you there without a dedicated threat analyst on staff.
Getting Started Without Disrupting Production
One concern we hear consistently: "Will deploying a DaemonSet on every node affect my workloads?" Fair question. Here's the honest answer.
eBPF probes are read-only at the kernel level. The probe observes system calls; it does not intercept or modify them. There is no in-line enforcement path that can cause application failures. The worst-case failure mode of the eBPF probe itself is that telemetry stops streaming, not that your application breaks.
In our experience, teams can deploy Kubesentry to a staging cluster, run through the 7-14 day baseline window, review the anomaly events, and be operationally confident before rolling to production. Full deployment to a mid-size Kubernetes environment typically takes under 4 hours from first helm install to first tactic-classified alert in Datadog or Splunk.
Container escape detection doesn't require a six-month security program. It requires kernel visibility, a behavioral baseline, and alerts that carry enough context to act on. That's the gap we built Kubesentry to close.
Want to see how Kubesentry detects container escape patterns in your EKS or GKE cluster? Request a demo and we'll walk through your specific environment.