
Is Your Security Operation Merely a Compliance Operation?

28 April 2026 · Security Operations · 9 min read
The question nobody asks in the board pack: is each control compliance-driven or threat-driven?

Why deployed. Compliance-driven: "the framework requires it." Threat-driven: "this attack path is undetected."
What triggers change. Compliance-driven: an audit finding or contract renewal. Threat-driven: an incident, gap analysis, or threat intel.
Detection rules. Compliance-driven: vendor defaults, never tuned. Threat-driven: written for the environment, tested, tuned.
Incident response. Compliance-driven: "we have a plan" (untested, filed away). Threat-driven: playbooks exercised, tabletops run, gaps closed.
Success metric. Compliance-driven: audit passed, certificate renewed. Threat-driven: MTTD reduced, coverage gaps closed.

One column fails when the attacker ignores the framework; the other adapts because it's built around the adversary.

Organizations spend billions on security technologies annually. The budgets are real, the tools are deployed, and the dashboards look impressive. But sit with a security leader for twenty minutes and ask them to walk you through the decision chain behind each major technology in their stack, and a pattern emerges that most people don't want to name.

"We deployed Sentinel because our ISO 27001 certification requires centralised log management." "We implemented MFA because Cyber Essentials mandates it." "We purchased endpoint protection because the client contract specifies it." "We added XDR because the insurer asked for it on the renewal questionnaire."

The technology in each case is legitimate. The capability is real. But the decision that put it there wasn't driven by a threat assessment or a gap analysis or an incident that exposed a weakness. It was driven by a checkbox on someone else's form. And that distinction — the why behind the deployment — determines whether what you're running is a security operation or a compliance operation wearing security's uniform.


The procurement decision chain tells the truth

Every security technology in your environment got there through a decision chain. Someone identified a need, someone approved a budget, someone selected a vendor, someone deployed the tool. The question is what triggered that chain.

In a compliance-driven operation, the chain starts externally. A framework names a control. A client contract requires a capability. A regulator publishes a requirement. An insurer adds a question to the renewal form. The organization responds by procuring a tool that satisfies the requirement. The tool is deployed, typically with default configuration, because the requirement was "have this capability" not "configure this capability to detect the specific attacks targeting your environment." The auditor verifies the tool exists. The checkbox is marked. The procurement chain is complete.

In a threat-driven operation, the chain starts internally. An incident reveals a detection gap. A purple-team exercise proves a rule doesn't fire against a common technique. A threat intelligence report identifies a campaign targeting the industry. A coverage assessment finds an entire ATT&CK tactic with zero detection rules. The team responds by engineering a solution — writing detection rules, tuning configurations, building monitoring capabilities — specifically for the gap that was identified. The success criterion isn't "tool exists" but "gap closed."

The procurement chain is visible if you look. Pull up your last five significant security purchases. For each one, trace the decision back to its origin. Was the trigger internal (we found a gap, we need this) or external (the framework requires it, the client demands it, the auditor flagged it)? The ratio tells you what kind of operation you're running.
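That trace can be done on the back of a napkin, but as a worked version of the tally, here is a minimal sketch in Python. Every tool name and trigger label below is illustrative; substitute your own procurement history.

```python
# Classify recent security purchases by what triggered the decision chain.
# All entries are illustrative; replace with your own procurement history.
EXTERNAL = {"framework", "contract", "audit", "insurer", "regulator"}
INTERNAL = {"incident", "gap-analysis", "threat-intel", "purple-team"}

purchases = [
    ("SIEM",              "framework"),    # ISO 27001 logging requirement
    ("MFA",               "framework"),    # Cyber Essentials mandate
    ("EDR",               "contract"),     # client contract clause
    ("XDR",               "insurer"),      # renewal questionnaire
    ("Custom detections", "incident"),     # post-incident gap analysis
]

external = sum(1 for _, trigger in purchases if trigger in EXTERNAL)
internal = sum(1 for _, trigger in purchases if trigger in INTERNAL)
print(f"external {external} : internal {internal}")  # external 4 : internal 1
```

A 4:1 external-to-internal ratio, as in this hypothetical stack, is the signature of a compliance operation.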


The vendor default problem

Here's where compliance-driven procurement creates a specific, measurable failure. When a tool is deployed to satisfy a compliance requirement, the incentive is to get it deployed, not to get it working effectively. The requirement says "centralised log management with alerting." It doesn't say "detection rules tuned to your environment's specific attack surface with a false positive rate below 5%."

We see the result in real environments consistently. A Sentinel workspace with 400 analytics rules, every one of them from Microsoft's template gallery, deployed on day one and never touched again. The rules fire. The alerts generate. The dashboard shows an active SIEM with detection coverage. And the false positive rate is 80% or higher, because generic rules written for every Microsoft customer on earth aren't tuned for any specific one.
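The "80% or higher" figure is measurable from your own alert dispositions. The sketch below assumes your SIEM or ticketing system records a closure label per alert; the label names here are illustrative, so map them to whatever your tooling actually stores.

```python
# Estimate a rule's false positive rate from closed-alert dispositions.
# Disposition labels are illustrative; map them to what your SIEM records.
from collections import Counter

def false_positive_rate(dispositions):
    counts = Counter(dispositions)
    closed = sum(counts.values())
    # Benign true positives are real detections of harmless activity --
    # still noise from the analyst's point of view.
    noise = counts["false_positive"] + counts["benign_true_positive"]
    return noise / closed if closed else 0.0

# 100 closed alerts from one untuned template rule (hypothetical sample)
sample = (["false_positive"] * 78
          + ["benign_true_positive"] * 6
          + ["true_positive"] * 16)
print(f"{false_positive_rate(sample):.0%}")  # 84%
```

Run this per rule, not per workspace: one noisy template rule can hide behind twenty quiet ones in an aggregate figure.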

The SOC analysts — if there are SOC analysts, and not just a managed service provider checking a different compliance box — learn to ignore the noise. The alerts that matter are buried in the alerts that don't. The detection capability exists on paper and fails in practice. But the audit requirement is met, because the requirement was "have a SIEM with alerting" not "have a SIEM that detects attacks."

This isn't a vendor problem. Microsoft, Splunk, Elastic — they all ship sensible defaults. The defaults are starting points. The problem is that compliance-driven procurement treats starting points as destinations.


What actually happens during an incident

The gap between compliance-driven and threat-driven security becomes concrete during an incident. Not hypothetically — specifically, in the first hours of a real breach.

Consider a common scenario: an AiTM credential phishing attack. The attacker proxies the authentication flow, captures the session token, and authenticates as the victim. MFA was deployed, was active, and was bypassed — because MFA protects against password-only attacks, not against session token theft. Conditional Access was deployed but didn't require device compliance for the application the attacker targeted. The SIEM had a rule for "impossible travel" but the attacker authenticated from the same country. The email gateway flagged the phishing email as suspicious but the user clicked the link from their mobile device, which routes through a different gateway with a different policy.

Every control was present. Every compliance requirement was met. MFA: deployed. SIEM: active. Email protection: configured. Conditional Access: enabled. The audit would have passed on Monday. The breach happened on Tuesday.

Now look at what a threat-driven operation does differently with the same tools. The Conditional Access policies are configured not just for MFA but for device compliance, sign-in risk, and token lifetime restrictions — because the threat model identified AiTM as a primary risk. The SIEM has custom detection rules for token replay patterns, specifically the telemetry signature of an AiTM proxy — because the detection engineering team wrote rules against the actual attack technique, not just the framework's named controls. The SOC has a specific playbook for post-AiTM investigation — revoke sessions, audit mailbox rules, check for OAuth consent grants — because the team exercised this scenario in a tabletop.
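The token-replay detection described above can be sketched as plain logic: the same session identifier reappearing from different network infrastructure shortly after issuance. The field names (`session_id`, `asn`) are illustrative stand-ins for whatever your identity provider's sign-in logs actually expose, and real rules would add more discriminators (user agent, device ID, named locations).

```python
# Flag session tokens reused from a different ASN shortly after issuance --
# a common AiTM replay signature. Field names are illustrative stand-ins
# for real sign-in log telemetry.
from datetime import datetime, timedelta

def token_replay_suspects(sign_ins, window=timedelta(hours=1)):
    by_session = {}
    for event in sorted(sign_ins, key=lambda e: e["time"]):
        by_session.setdefault(event["session_id"], []).append(event)
    suspects = []
    for session_id, events in by_session.items():
        first = events[0]
        for later in events[1:]:
            if (later["time"] - first["time"] <= window
                    and later["asn"] != first["asn"]):
                suspects.append(session_id)
                break
    return suspects

logins = [
    {"session_id": "s1", "time": datetime(2026, 4, 28, 9, 0), "asn": 12576},  # victim's carrier
    {"session_id": "s1", "time": datetime(2026, 4, 28, 9, 7), "asn": 16509},  # same token, hosting ASN
]
print(token_replay_suspects(logins))  # ['s1']
```

No framework control maps to this logic; it exists only because someone modelled the attack.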

Same tools. Same budget. Radically different outcome. The difference is whether the configuration was driven by "what does the framework require" or "what does the adversary do."


The change trigger diagnostic

If you want a simple test to determine where your operation sits, ask one question: what causes your security team to change something?

In a compliance-driven operation, the triggers are external and periodic. An audit finding requires remediation by a deadline. A framework update adds new controls. A contract renewal includes new security requirements. A regulatory change mandates a new capability. The team acts when someone outside the organization requires action, and the cadence follows the audit cycle — annual assessments, quarterly reviews, contract renewals.

In a threat-driven operation, the triggers are internal and continuous. An incident revealed that a detection rule didn't fire. A purple-team exercise proved that a common credential-access technique produced no alert in any SIEM. A threat intelligence report identified a new campaign using OAuth consent phishing against organizations in your sector. A monthly coverage assessment found that the Discovery tactic has twelve techniques and your detection rules cover three of them.
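The monthly coverage assessment mentioned above reduces to a simple computation: techniques with at least one detection rule versus techniques in the tactic. The technique counts and IDs below are illustrative, not an authoritative copy of the ATT&CK framework.

```python
# Rough ATT&CK coverage per tactic: techniques with at least one deployed
# detection rule over total techniques. Counts and IDs are illustrative.
tactic_techniques = {
    "Discovery": 12,           # per the example in the text
    "Credential Access": 17,
}
# Technique IDs your deployed rules are tagged with (hypothetical)
covered = {
    "Discovery": {"T1087", "T1018", "T1082"},    # 3 of 12
    "Credential Access": {"T1110", "T1558"},
}

coverage = {tactic: len(covered.get(tactic, set())) / total
            for tactic, total in tactic_techniques.items()}
for tactic, ratio in coverage.items():
    print(f"{tactic}: {ratio:.0%}")
```

The output is less important than the trigger it creates: a tactic at 25% coverage is an internal, continuous reason to change something, no audit required.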

The threat-driven operation changes when the threat landscape demands it, which is constantly. The compliance-driven operation changes when the audit cycle demands it, which is annually.

Both operations update. The question is what drives the update and how quickly the operation responds to new information about the adversary. A compliance-driven operation that takes twelve months to respond to a new attack technique — because the framework hasn't been updated, the audit hasn't been scheduled, the budget hasn't been approved — has a twelve-month window during which the adversary operates freely.


The honest conversation about frameworks

None of this is an argument against frameworks. ISO 27001, NIST CSF 2.0, Cyber Essentials, SOC 2, CIS Controls — these frameworks represent decades of accumulated operational wisdom distilled into structured requirements. They provide a shared language for security capability. They set a baseline that prevents the most basic failures. They give organizations a structured path from "we have nothing" to "we have a defensible program."

The problem isn't that organizations use frameworks. The problem is what happens after the framework is implemented.

In a healthy security program, the framework is the floor. It sets the minimum standard. Above the floor, the threat-driven operation engineers the capabilities that the framework can't prescribe — because frameworks are general and threats are specific. ISO 27001 says "implement monitoring." It can't tell you which ATT&CK techniques to write detection rules for, because that depends on your infrastructure, your industry, and the adversary groups that target organizations like yours.

In an unhealthy security program, the framework is the ceiling. The audit pass is the success metric. Once the certificate is renewed, the program enters maintenance mode until the next audit cycle. The team's operational rhythm follows the auditor's schedule, not the adversary's.

The framework tells you what categories of capability to have. Your threat model tells you how to configure those capabilities for your environment. If you have the first without the second, you have a compliance operation.


The budget reveals the priority

Look at where the security budget goes and you'll see whether the operation is compliance-driven or threat-driven.

Compliance-driven budget allocation is tool-centric. The budget funds tool licenses, managed service contracts, and audit preparation. The GRC team gets headcount. The detection engineering team — if it exists — doesn't. The budget cycle follows the compliance calendar: tool renewals in Q1, audit preparation in Q3, remediation spend in Q4. Training budget goes to certification prep for the frameworks the organization is assessed against.

Threat-driven budget allocation is capability-centric. Tool licenses are part of the spend, but so is detection engineering time, purple-team exercises, threat intelligence analysis, and incident response rehearsals. The budget funds people who configure, tune, and validate the tools — not just the tools themselves. Training budget goes to operational skills: detection engineering, investigation methodology, threat hunting.

An organization that spends $250,000 on a SIEM license and $0 on detection engineering has a compliance program. An organization that spends $250,000 on a SIEM license and $100,000 on a detection engineer who writes and tunes rules has a security program. The tools are the same. The investment in making them work is the difference.


The managed SOC question

Managed SOC providers occupy an interesting position in this discussion. Many organizations outsource their security monitoring to satisfy a compliance requirement — "we need 24/7 monitoring" — and the managed SOC satisfies the checkbox. But the quality of managed SOC services varies enormously, and the compliance-driven procurement pattern often selects on price rather than capability.

A compliance-driven organization selects a managed SOC provider that meets the contractual requirements at the lowest cost. The contract specifies SLAs for alert response time. It rarely specifies detection rule quality, tuning cadence, false positive rates, or coverage against specific ATT&CK techniques. The managed SOC monitors the alerts the default rules generate. The compliance requirement is met.

A threat-driven organization selects a managed SOC provider based on detection capability. The evaluation includes: what rules do they deploy, how do they tune for our environment, what's their false positive rate, how do they handle detection gaps, do they write custom rules, do they participate in purple-team exercises? The contract specifies operational outcomes, not just response times.
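One way to make that evaluation concrete is a weighted scorecard over the operational criteria, scored during vendor due diligence. The criteria, weights, and ratings below are purely illustrative; choose weights that reflect your own threat model.

```python
# Weighted scorecard for managed SOC evaluation.
# Criteria, weights, and 0-5 ratings are illustrative.
criteria = {
    "custom detection rules":      (0.25, 4),
    "tuning cadence":              (0.20, 3),
    "false positive rate":         (0.20, 2),
    "purple-team participation":   (0.15, 1),
    "ATT&CK coverage reporting":   (0.20, 3),
}

score = sum(weight * rating for weight, rating in criteria.values())
print(f"weighted score: {score:.2f} / 5")  # weighted score: 2.75 / 5
```

A provider that aces the SLA table but scores under 3 here is selling monitoring-as-a-checkbox.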

The managed SOC market is full of providers who exist because compliance requirements created demand for monitoring-as-a-checkbox. Organizations that select on compliance criteria get compliance-grade monitoring. Organizations that select on operational criteria get meaningfully different outcomes.


Moving from compliance to threat-driven

If you've read this far and recognized your operation in the compliance-driven column, the path forward doesn't require dismantling anything. The compliance foundation is real and valuable. The tools are deployed. The frameworks are implemented. You're starting from a position of "we have the technology but haven't optimized it for the adversary."

The transition is additive, not a replacement. You keep the compliance program running — the audits, the certifications, the contractual requirements. On top of that, you build the threat-driven layer: detection rules written for your environment, tested against real techniques, tuned for your false positive profile. Incident response playbooks exercised against realistic scenarios. Coverage assessments that measure what your detections actually catch, not just how many rules are deployed.

The first step is the diagnostic. Audit your security stack against the question: was this deployed for compliance or for the adversary? The honest answer tells you where you are. The gap between where you are and where you want to be is the roadmap.


What to do this week

Audit your SIEM rules. Pull up your SIEM analytics rules. Count how many are vendor defaults versus custom rules written for your environment. The ratio is the single most telling metric.
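That count can be scripted against an exported rule list. The sketch below assumes a Sentinel-style export where rules created from the template gallery carry an `alertRuleTemplateName` property, which Sentinel's analytics rule API exposes; for other SIEMs, treat the field name as an assumption and substitute the equivalent marker.

```python
# Tally vendor-template rules versus custom rules from an exported rule list.
# Assumes Sentinel-style JSON where template-derived rules carry an
# "alertRuleTemplateName" property; adjust the field for other SIEMs.
import json

def default_vs_custom(rules):
    default = sum(1 for r in rules
                  if r.get("properties", {}).get("alertRuleTemplateName"))
    return default, len(rules) - default

export = json.loads("""[
  {"name": "rule-1", "properties": {"alertRuleTemplateName": "guid-a"}},
  {"name": "rule-2", "properties": {"alertRuleTemplateName": "guid-b"}},
  {"name": "rule-3", "properties": {"displayName": "Custom: AiTM token replay"}}
]""")
default, custom = default_vs_custom(export)
print(f"{default} default : {custom} custom")  # 2 default : 1 custom
```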

Trace your procurement. Pick your three most expensive security tools. For each one, write down the original procurement trigger. Framework requirement? Client contract? Insurance questionnaire? Threat assessment? Internal gap analysis?

Ask the SOC. Ask your SOC team or managed provider: when was the last time a detection rule was added or changed because of something the team observed or tested, rather than because an audit or framework required it?

Test your IR plan. Check your incident response plan. When was it last exercised — not reviewed on paper, but actually walked through with the team against a realistic scenario?

Validate one thing. If the answers point toward compliance, start with one concrete action: take your five most critical detection rules (the ones that should catch the attacks most likely to target your organization) and validate that they actually fire. Run the technique in a lab and check whether the alert generates. That single exercise moves you from "we have rules" to "we know our rules work."
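The output of that exercise is worth recording as a ledger so the gaps become a backlog rather than a memory. A minimal sketch, with hypothetical rule names and results:

```python
# Minimal validation ledger: for each critical rule, record whether the
# simulated technique actually produced an alert. Names and results are
# illustrative.
validation_runs = [
    ("AiTM token replay",       "T1557",     True),
    ("OAuth consent grant",     "T1528",     True),
    ("Inbox rule manipulation", "T1564.008", False),  # rule never fired
    ("Impossible travel",       "T1078",     True),
    ("Mass file download",      "T1530",     False),  # rule never fired
]

failures = [name for name, _, fired in validation_runs if not fired]
fired_count = len(validation_runs) - len(failures)
print(f"{fired_count}/{len(validation_runs)} rules fired")
print("needs engineering:", failures)
```

Two rules that never fire against their own target technique is exactly the kind of finding an audit will never surface.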


By Ridgeline Cyber


Going deeper: If this post resonated, two courses on the training platform address each side of the problem. Practical GRC builds the compliance foundation — frameworks, policies, audit preparation — done properly so you have a defensible floor. Detection Engineering builds the threat-driven detection program on top of it — writing rules for your environment, testing them against ATT&CK techniques, and measuring coverage. Both have free starting modules.

Ridgeline Cyber Defence. Written by security practitioners. Published weekly on Tuesdays.


