Kubernetes DevSecOps Posture Management: Runtime vs. Configuration


When I joined from CrowdStrike's Falcon container security team, one of the first things I noticed was how often customers conflated two very different problems. They'd ask: "We have Wiz. Why do we still get compromised?" The answer, almost every time, came down to the same misunderstanding. Configuration scanning and runtime detection are not the same thing. Not even close.

The Configuration vs. Runtime Split

CSPM tools like Wiz, Lacework, or Prisma Cloud are exceptional at what they do. They photograph your Kubernetes cluster's configuration state: which namespaces have overpermissive RBAC, which container images carry known CVEs, which network policies are too wide, which pods run as root. That photograph is valuable. You need it.

But a photograph is a frozen moment. It does not tell you what is happening inside your containers right now, while they are running. And that is exactly where attackers operate.

Runtime posture is a different domain entirely. It is about behavioral detection: what system calls is this container making? Is this service account accessing Kubernetes API endpoints it has never touched in 30 days? Did a shell just spawn inside a container whose baseline profile includes zero exec events? Configuration scanning cannot answer those questions. Architecturally, it was never designed to.
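Those three questions map naturally onto a baseline-versus-event check. Here is a minimal sketch of that idea; the event and baseline shapes are hypothetical illustrations, not any vendor's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadBaseline:
    """Behavioral profile learned during a workload's observation window."""
    syscalls: set = field(default_factory=set)       # syscalls observed
    api_endpoints: set = field(default_factory=set)  # Kubernetes API paths touched
    exec_events_seen: int = 0                        # 0 means any exec is anomalous

def is_anomalous(baseline: WorkloadBaseline, event: dict) -> bool:
    """Return True when an event falls outside the learned profile."""
    kind = event["type"]
    if kind == "exec":
        # A shell spawning in a workload whose baseline has zero exec events
        return baseline.exec_events_seen == 0
    if kind == "syscall":
        return event["name"] not in baseline.syscalls
    if kind == "api_access":
        return event["endpoint"] not in baseline.api_endpoints
    return False
```

The point of the sketch is the asymmetry: a configuration scanner has no equivalent of `baseline`, so it cannot ask any of these questions at all.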

In our experience talking with mid-size SaaS teams, roughly 60-75% of runtime security events are first discovered during post-incident forensics rather than in real time. That number should alarm you. It means the attacker already moved, already exfiltrated, already pivoted before your CSPM showed you anything was wrong.

What Each Layer Actually Catches

It helps to be concrete. Here is a rough breakdown of what each posture layer catches and what it misses:

CSPM (Wiz, Lacework)
  Catches: misconfigured RBAC, exposed secrets, CVEs in images, public buckets
  Blind spot: runtime behavior inside valid containers; post-deployment attacker activity

eBPF runtime detection (Kubesentry, Falco)
  Catches: privilege escalation attempts, unexpected exec events, cryptomining, lateral movement via service accounts
  Blind spot: static misconfiguration; CVEs in container images at rest

Both layers together
  Catches: full kill-chain visibility from image build to container execution to lateral movement
  Blind spot: almost none; little that a well-resourced attacker can hide

Neither layer replaces the other. A container can be clean from a CVE perspective, deployed with correct RBAC, and still serve as a cryptomining staging ground within 4-8 minutes of a successful application-layer exploit. CSPM would show a green posture right up until the billing spike.

Why Mid-Size Teams Struggle with This

Enterprise organizations solved this years ago. They run Wiz for configuration posture and CrowdStrike Falcon or Sysdig Secure for runtime. They have the staffing to operate two tools and correlate findings across them. Most have dedicated security engineers who do nothing but hunt alerts all day.

Mid-size teams, by definition, are not that. A team with 1-3 DevSecOps staff running 20-50 Kubernetes namespaces in production has a different problem set. They cannot justify a $200K/year Falcon enterprise contract for runtime coverage on top of the CSPM they are already paying for. They do not have a threat analyst to tune Falco rules from scratch. And they do not have time to build the enrichment pipeline that turns raw eBPF events into actionable, triage-ready alerts.

Here's the thing: the runtime detection problem is not technically harder at mid-size scale. It is operationally harder. Raw telemetry from an eBPF probe on a busy Kubernetes node produces thousands of events per second. Without workload-level behavioral baselining, you cannot distinguish normal application behavior from attacker activity. That baselining is the work enterprise security engineering teams have been doing internally for years. It just was not packaged for a 2-person DevSecOps team until recently.
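Mechanically, that baselining work is a fold over the raw event stream into per-workload profiles. A toy version, assuming a simplified event dict shape (a real pipeline also handles sampling, cardinality limits, and time decay):

```python
from collections import defaultdict

def build_baselines(events):
    """Fold a stream of raw eBPF-style events into per-workload profiles.

    events: iterable of dicts like
        {"workload": "prod/payments-api", "type": "syscall", "name": "openat"}
    Returns {workload: {"syscalls": set, "exec_count": int}}.
    """
    baselines = defaultdict(lambda: {"syscalls": set(), "exec_count": 0})
    for ev in events:
        profile = baselines[ev["workload"]]
        if ev["type"] == "syscall":
            profile["syscalls"].add(ev["name"])
        elif ev["type"] == "exec":
            profile["exec_count"] += 1
    return dict(baselines)
```

The aggregation itself is trivial; the operational burden is everything around it, which is exactly why raw probe output is not a usable product for a 2-person team.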

The Baseline Window Is Non-Negotiable

When we built Kubesentry's detection engine, one of the early debates was around the baseline window length. How long does a Kubernetes workload need to run before you have a reliable behavioral profile?

The answer is 7-14 days. Not 24 hours, despite what some vendors will claim. In our data, workloads that run for fewer than 7 days before baselining is declared complete produce 3-5x the false-positive rate compared to workloads that have run for at least 10 days. The variance in system-call patterns across a full weekly traffic cycle is significant, especially for SaaS products with weekday-heavy usage patterns.

The practical implication: when you deploy runtime detection for the first time, plan for a 7-14 day learning period where you are collecting telemetry but not alerting on deviations. That is not a limitation. It is the cost of building a behavioral model that earns its alerts.
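The learning-period rule is easy to enforce in the alert path itself: collect from day one, suppress deviation alerts until the window has elapsed. A sketch, using a 10-day threshold from the 7-14 day range above (the constant and function names are illustrative):

```python
from datetime import datetime, timedelta, timezone

# Inside the 7-14 day range discussed above; an assumed default, not a fixed rule.
BASELINE_WINDOW = timedelta(days=10)

def should_alert(workload_first_seen: datetime, now: datetime) -> bool:
    """Telemetry is collected from day one, but deviation alerts fire only
    once the behavioral baseline window has fully elapsed."""
    return now - workload_first_seen >= BASELINE_WINDOW
```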

During that window, here is what you should do in parallel:

  • Audit your RBAC RoleBindings and ServiceAccount scopes across all namespaces. CSPM will surface the egregious ones; manual review catches the subtle ones.
  • Map your expected outbound network destinations per workload. Kubesentry will learn these from telemetry, but having a reference list helps you validate the baseline faster.
  • Identify your highest-risk namespaces: anything running customer-data-adjacent workloads or with access to Secrets should be prioritized for alert tuning.
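The RBAC audit in the first bullet can be partially scripted. A sketch of the kind of check to run over exported RoleBindings, using a deliberately simplified dict shape rather than the full Kubernetes RBAC schema:

```python
def risky_bindings(bindings):
    """Flag service accounts with obviously overbroad grants.

    bindings: list of dicts like
        {"namespace": "prod-core", "service_account": "deployer",
         "role": "cluster-admin", "verbs": ["*"]}
    (a simplified shape for illustration, not the real RBAC API objects).
    """
    flagged = []
    for b in bindings:
        if b["role"] == "cluster-admin" or "*" in b.get("verbs", []):
            flagged.append((b["namespace"], b["service_account"]))
    return flagged
```

This catches the egregious grants automatically; the subtle ones, per the bullet above, still need manual review.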

Where MITRE ATT&CK Context Changes the Game

One of the reasons I am skeptical of raw Falco rules as a long-term runtime strategy for mid-size teams is the lack of tactic context. Falco fires an event. You see: "A shell was spawned in a container." Now what? Is this the on-call engineer running kubectl exec to debug a prod issue, or is it a post-exploitation shell from a container escape? The event alone does not tell you.

MITRE ATT&CK for Containers tactic classification changes the triage calculus. If that exec event is tagged Execution + Privilege Escalation, and the same pod had a Credential Access event 90 seconds earlier, you are looking at a kill chain. Period. The investigator does not need to manually reconstruct the sequence from raw logs; the context ships with the alert.

In practice, for a 2-person DevSecOps team getting paged at 2am, the difference between "shell spawned in container" and "Privilege Escalation: shell spawned in a container with no prior exec baseline, following a Credential Access event in the same pod" is the difference between spending 45 minutes investigating a false positive and spending 5 minutes confirming a real incident and initiating pod isolation.
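That tactic-aware correlation reduces to a small pass over tagged events: remember the last Credential Access per pod, and flag any Execution or Privilege Escalation event that follows it within a window. A sketch with a hypothetical event shape (timestamps in seconds):

```python
def detect_kill_chains(events, window_seconds=120):
    """Flag pods where a Credential Access event is followed by an
    Execution or Privilege Escalation event within window_seconds."""
    flagged = set()
    last_cred_access = {}  # pod -> timestamp of most recent Credential Access
    for ev in sorted(events, key=lambda e: e["ts"]):
        pod = ev["pod"]
        if ev["tactic"] == "Credential Access":
            last_cred_access[pod] = ev["ts"]
        elif ev["tactic"] in ("Execution", "Privilege Escalation"):
            seen = last_cred_access.get(pod)
            if seen is not None and ev["ts"] - seen <= window_seconds:
                flagged.add(pod)
    return flagged
```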

Wiz Integration: Closing the Loop

Here is a workflow pattern we see working well for teams that already have Wiz deployed. When Kubesentry fires a runtime alert, it pulls enrichment from the Wiz Security Graph API to add the vulnerability posture of the affected container image. You end up with a view like this:

"Suspicious exec in payments-api pod, namespace prod-core, node ip-10-0-12-44. MITRE: Execution (T1059). Container image: payments-api:v2.14.3 — Wiz: 2 critical CVEs, 1 unpatched RCE in bundled libssl. Recommended action: isolate pod, rotate service account token."

That alert earns its page. The CVE context from Wiz tells you that the exec event may be the exploitation of a known vulnerability in that specific image version, not a misconfigured kubectl access by a developer. The confidence level on the alert goes up, and the remediation path gets specific.
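Composing that alert text is the easy half of the integration. A sketch of the rendering step, where the `image_posture` dict is a stand-in for whatever your CSPM returns rather than the actual Wiz Security Graph response shape:

```python
def enrich_alert(event: dict, image_posture: dict) -> str:
    """Render a triage-ready alert line from a runtime event plus the
    affected image's vulnerability posture pulled from a CSPM.
    Both dict shapes here are illustrative assumptions."""
    return (
        f"Suspicious {event['kind']} in {event['pod']} pod, "
        f"namespace {event['namespace']}, node {event['node']}. "
        f"MITRE: {event['tactic']} ({event['technique']}). "
        f"Container image: {event['image']} — "
        f"{image_posture['critical_cves']} critical CVEs. "
        f"Recommended action: {event['action']}."
    )
```

The hard half is the API plumbing and credential management behind `image_posture`, which is exactly what lives in the enrichment pipeline.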

Honestly, this is the integration story the industry has been missing. Runtime telemetry and configuration posture both exist, but they have lived in separate consoles. The value is in the correlation.

Where to Start: A Practical Sequence

If you are a mid-size SaaS team that has CSPM coverage but no runtime detection, here is the sequence I would recommend:

  1. Deploy the eBPF DaemonSet and run silent for 7-10 days. Let the baseline build. Do not alert yet. Just collect.
  2. During the baseline window, clean up your RBAC. Overpermissioned service accounts are the primary lateral-movement vector in Kubernetes. Fix the most egregious ones before runtime alerting goes live, or you will have a wave of Credential Access alerts on day 1 that obscure real threats.
  3. Enable alerting in high-risk namespaces first. Not everywhere simultaneously. Pick your top 3-5 namespaces by data sensitivity and enable detection there. Build confidence in the alert fidelity before rolling out cluster-wide.
  4. Set up PagerDuty routing for high-severity alerts only. Kubesentry classifies by severity. Page on Execution + Privilege Escalation + Lateral Movement combinations. Log and review lower-severity events asynchronously.
  5. Wire in Wiz enrichment. The context improvement on alert quality is worth the 30-minute integration setup.
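Step 4's routing rule is small enough to express directly. Tactic names follow MITRE ATT&CK; the two-tactic threshold is an illustrative choice, not a prescribed one:

```python
PAGE_TACTICS = {"Execution", "Privilege Escalation", "Lateral Movement"}

def route(alert_tactics: set) -> str:
    """Page only when high-risk tactics appear in combination;
    queue everything else for asynchronous review."""
    return "page" if len(alert_tactics & PAGE_TACTICS) >= 2 else "log"
```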

Total deployment time from DaemonSet to first production alert: about 4 hours. The 7-10 day baseline window is not setup time. It is the engine learning your environment. You can go home after the first 4 hours.

The Posture You Actually Need

Complete Kubernetes DevSecOps posture is not CSPM or runtime detection. It is both, with the enrichment bridge between them. CSPM tells you where your configuration is drifting toward risk. Runtime detection tells you when an attacker is actively exploiting that risk or any other vector, including the ones your CSPM scan has not flagged yet.

The teams that get compromised are not the ones that skipped both layers. They are the ones that deployed CSPM, got comfortable with the green dashboard, and assumed they were covered. Our data shows that assumption costs an average of 11 days of undetected attacker dwell time in a missed namespace. That is enough time for a cryptominer to escalate into data exfiltration.

The good news: runtime detection scaled to Kubernetes is no longer an enterprise-only capability. It just needs to be packaged for a team that cannot afford to spend months on it.

See how Kubesentry's eBPF runtime detection integrates with your existing CSPM setup. Request a demo and we'll walk through your current posture gaps.
