Why Falco Rules Alone Are Not Enough for Mid-Size DevSecOps Teams

I spent four years building eBPF-based threat detection with Sysdig, and a large portion of that time was devoted to one persistent problem: Falco rules that enterprise customers deployed, got overwhelmed by, and quietly disabled. The rules were correct. The detection logic was sound. But for teams without a dedicated threat analyst sitting in front of a SIEM all day, raw Falco output is often unworkable. Here is what we have learned about where Falco rules genuinely shine, where they fall apart at mid-size scale, and what a behavioral baseline layer actually changes.

What Falco Rules Get Right

Falco does something that almost no other security tool in the Kubernetes space does at its price point: it watches system calls. Not log lines. Not configuration drift. Not Kubernetes API audit events alone. Actual kernel-level syscalls from every container on the node.

That matters enormously for a specific class of threats.

Privilege Escalation via Syscall Patterns

A container escape that uses a kernel exploit will typically call ptrace(), clone() with elevated flags, or attempt a mount() syscall into a privileged namespace. Falco catches those. A static CSPM scanner looking at your cluster's configuration state sees nothing, because the attack is happening inside a technically valid container image with technically valid permissions. Falco's default rules for detecting PTRACE_ATTACH ptrace calls and unexpected privilege changes via setuid sequences are among the most reliable threat detections available to a team at any budget level.
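A detection along these lines can be expressed in a few lines of Falco's YAML rule syntax. The sketch below is simplified and illustrative; the rule name, output format, and tags are ours, not the shipped default rule, which carries more conditions:

```yaml
# Simplified sketch of a ptrace-attach detection rule.
# Field names (evt.type, evt.arg.request, container.id) follow
# Falco's documented event fields; the rule itself is illustrative.
- rule: PTRACE Attach Attempt in Container
  desc: Detect ptrace(PTRACE_ATTACH) issued from a containerized process
  condition: >
    evt.type=ptrace and evt.dir=> and
    evt.arg.request contains PTRACE_ATTACH and
    container.id != host
  output: >
    ptrace attach attempt (proc=%proc.name pid=%proc.pid
    container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [container, privilege_escalation]
```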

In our experience, the standard Falco ruleset detects container escape precursors (attempts to write to sensitive host paths, unexpected exec of shell binaries inside containers that should never have shells) with a true-positive rate above 85% when tested against known techniques. That is genuinely good for a static rule library.

Known-Bad Process Execution

Falco excels at matching against known-bad patterns: xmrig execution, reverse shell via nc or bash -i, credential dumps via cat /etc/shadow. If an attacker runs a command that appears verbatim in a Falco rule, you will hear about it within seconds. For cryptomining in particular, stratum protocol connection patterns and specific xmrig argument signatures are well-documented in community rulesets and they work.
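A known-bad match of this kind is the simplest class of Falco rule to write. The sketch below is illustrative, not a community rule verbatim; the list name and binary names are examples:

```yaml
# Illustrative known-bad exec rule; the list and rule names are
# examples, not the community ruleset verbatim.
- list: known_miner_binaries
  items: [xmrig, minerd]

- rule: Cryptominer Binary Executed in Container
  desc: Exec of a known mining binary inside a container
  condition: >
    evt.type in (execve, execveat) and evt.dir=< and
    proc.name in (known_miner_binaries) and container.id != host
  output: >
    known miner executed (cmdline=%proc.cmdline container=%container.name)
  priority: CRITICAL
  tags: [container, cryptomining]
```

This works precisely because the binary name is the signal. The moment an attacker renames the binary or uses a tool already in the image, the rule has nothing to match.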

Where Falco Falls Short for Mid-Size Teams

Here is the thing: most attacks do not announce themselves with known-bad binaries. The more damaging incidents we tracked with Sysdig involved attackers operating entirely within the expected toolchain of the compromised container. And that is where static rules break down.

The Rule Maintenance Burden Is Underestimated

Falco ships with a community ruleset of roughly 80 default rules. That sounds manageable. In practice, a mid-size SaaS team running 20-50 Kubernetes workloads will fire false positives on 15-25 of those default rules within the first week of deployment. Services that legitimately call exec for health checks, containers that write to paths Falco considers sensitive, CI runners that clone repositories at runtime. Every false positive either gets tuned by hand or gets disabled entirely.

We have seen this pattern consistently. A 2-person DevSecOps team deploys Falco, gets 400 alerts in the first 48 hours, spends a week tuning, gets it down to 60 alerts per day, declares victory, and then stops reviewing them because 60 alerts per day is still too many for two people to triage alongside everything else they own. Three months later, the alert queue is effectively unmonitored.

Rule maintenance is a continuous job. Falco requires it. Full stop.

No Behavioral Baseline: The Core Gap

Static rules cannot distinguish between a curl call a container has always made and a curl call a container started making after it was compromised. Both look identical at the syscall level if the attacker is smart enough to use the tools already present in the image.

This is the detection gap that static rules fundamentally cannot close. An attacker who compromises a Node.js web service and uses that service's existing curl binary to exfiltrate data to an external IP will not match any Falco rule, because curl in a Node.js container is not a known-bad binary. It is a legitimate tool used in a novel way.

To catch that, you need a baseline of what normal looks like for that specific workload and the ability to flag deviation from it.

False-Positive Rate Without Continuous Tuning

Deployed untuned against a real SaaS Kubernetes cluster, Falco's default rules will produce 200-600 alert events per day, depending on workload density. Our data shows mid-size teams can realistically investigate 15-30 alerts per day given their other operational responsibilities. The math does not work.

The most common response is rule suppression. Teams add exceptions entries, disable entire rule categories, and narrow conditions until the alert volume becomes manageable. That process is iterative, requires deep familiarity with each workload's normal behavior, and must be repeated every time a service deploys a new image version that changes its runtime profile.
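Falco's built-in mechanism for this is the exceptions block, typically appended in falco_rules.local.yaml. The sketch below shows the shape of that tuning work; the rule name, image, and path are hypothetical:

```yaml
# Sketch of Falco's exceptions mechanism, appended locally.
# The rule name, exception name, image, and path are hypothetical;
# substitute the rule you are actually tuning.
- rule: Write below sensitive path
  exceptions:
    - name: ci_runner_clone
      fields: [container.image.repository, fd.name]
      comps: [=, startswith]
      values:
        - [myorg/ci-runner, /workspace]
  append: true
```

Every entry like this encodes knowledge about one workload's normal behavior, captured by hand. That is exactly the knowledge a behavioral baseline learns automatically.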

What Kubesentry's Falco Rules Compatibility Layer Adds

We built the Falco rules compatibility layer specifically because many teams already have Falco rules they trust, either from the community ruleset or from their own tuning work. The goal was not to replace those rules. It was to make them manageable.

Behavioral Baseline as the Primary Filter

Kubesentry runs a 7-14 day baseline window before alerting. During that window, we learn each workload's normal syscall profile: what it execs, where it writes, which network destinations it connects to, what its service account token usage looks like relative to its declared RBAC bindings. After baseline, alerts fire when behavior deviates from that profile, not just when it matches a static pattern.

This changes the false-positive economics fundamentally. A container that legitimately runs curl during normal operation will have that pattern in its baseline. A curl call to a new outbound IP after a compromised deployment will not. The first alert is suppressed. The second fires.

In our testing against real mid-size SaaS environments, baseline filtering reduces Falco rule alert volume by 60-75% in the first 30 days after baseline completion. Teams that were seeing 300 alerts per day get to 80-120. Teams that were seeing 80 get to 20-30. That is the range where two DevSecOps engineers can actually review everything.

Falco Rules Inside the Kubesentry Pipeline

Kubesentry's compatibility layer runs your existing Falco rules against the same eBPF telemetry stream, but before an alert surfaces in the console or gets routed to PagerDuty, it passes through the behavioral context filter. A Falco match against a sensitive write path in a container whose baseline includes that write path gets suppressed. A Falco match in a container that has never written there generates an alert with both the Falco rule name and the baseline deviation score attached.

The output is exportable to Sysdig Secure for teams that already have Sysdig in their stack. MITRE ATT&CK tactic classification is added inline, so every alert that does reach an analyst already has the tactic context attached without requiring a separate enrichment pipeline.

What This Does Not Fix

To be clear: a 7-14 day baseline window means you are not protected against novel threats during the learning period. If a compromise happens on day 3 of baseline, the malicious behavior will be incorporated into the profile. We mitigate this by running Falco's known-bad rules without baseline suppression for high-severity patterns (privilege escalation, known cryptomining signatures, shell spawns in containers that declare no shell in their image manifest). But the behavioral approach is not a substitute for the first two weeks of raw Falco coverage.

Additionally, behavioral baselines need to be recalculated when you deploy a new image version with legitimately different behavior. Kubesentry handles this automatically on deployment events from the Kubernetes API, but there is a 24-48 hour re-learning window per workload where sensitivity is reduced. Plan deployments accordingly if you are running in a high-threat environment.

The Practical Takeaway

Falco is not broken. The rules are solid. The gap is operational: static rules generate alert volume that a 2-3 person DevSecOps team cannot sustain, and the tuning work required to bring that volume down is continuous and underdocumented.

A behavioral baseline layer does not replace Falco rules. It makes them usable for teams that do not have a dedicated threat analyst to maintain them. That is the majority of mid-size SaaS companies.

If you are running Falco today and have disabled more than 30% of the default rules to manage noise, that is the clearest signal that you need a behavioral context layer. The detection surface you disabled contains real threats.

Want to see how Kubesentry's behavioral baseline works against your existing Falco deployment? Request a demo and we will walk through your current alert volume and what baseline filtering would change.

Related Articles

eBPF Runtime Security in Kubernetes: How Kernel-Level Telemetry Works (Engineering)

Detecting Container Escapes on EKS and GKE: What the Logs Miss (Security)

Kubernetes Service Account Abuse: How Attackers Move Laterally Through RBAC (DevSecOps)