In this module
PT0.2 The Purple-Team Mindset
You've seen detection rules fail — rules that were deployed, active, and producing zero alerts for months until a pen test or a real attacker exposed the gap. You've probably tuned a rule at least once and wondered whether the tuning removed real signal along with the noise. This sub names the systemic reasons behind those failures and introduces the discipline that prevents them from recurring.
Figure PT0.2 — The purple-team validation loop. Write the rule, fire the attack, watch what fires, tune. Repeat on a daily/weekly/monthly/quarterly cadence.
The tabletop that should make you uncomfortable
A team runs a tabletop exercise. The scenario is ransomware — contractor laptop compromised, attacker pivots to a file server, encrypts, drops a ransom note. The team walks through the response. Detection at step 4. Containment at step 6. The exercise wraps up: "good news — we'd catch this in production."
You ask: "Did anyone actually run the attack?"
No. The detection at step 4 was the rule someone wrote three years ago. Nobody's tested it. The detection is plausible. It is not evidenced. Purple teaming closes the gap between plausible and evidenced. This sub shows you the five systemic forces that create the gap.
Force 1: Vendor telemetry shifts
Microsoft publishes schema updates to Defender XDR tables roughly monthly. Most are additive. Some are breaking. Here's a real example from DeviceProcessEvents:
// Rule written in November — works
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| where ProcessCommandLine has "sekurlsa"
| project TimeGenerated, DeviceName, ProcessCommandLine,
AccountName, InitiatingProcessFileName
In February, Microsoft renamed ProcessCommandLine to InitiatingProcessCommandLine for consistency across the Advanced Hunting schema. The old column still exists but returns empty for new events. The rule loads. It runs. It matches nothing. Here's the fixed version:
// Rule fixed in February — note the column name change
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| where InitiatingProcessCommandLine has "sekurlsa"
| project TimeGenerated, DeviceName, InitiatingProcessCommandLine,
AccountName, InitiatingProcessFileName
One column rename. The Sentinel health workbook shows both rules as active. Only the second one produces results. The only way to find the break: fire the technique, check whether the alert appears. Five-minute fix. Six-month exposure if nobody tests.
You can check your own environment right now. Run this query in your Sentinel workspace:
// Heartbeat check — does this column still return data?
DeviceProcessEvents
| where TimeGenerated > ago(24h)
| where isnotempty(ProcessCommandLine)
| count
If the count is zero and you have active Windows endpoints forwarding telemetry, every rule that matches on ProcessCommandLine is broken.
Force 2: Attacker tools evolve faster than rules
PT0.1 showed this with Mimikatz vs procdump — the Sigma rule matching on SourceImage|endswith: '\mimikatz.exe' misses procdump entirely. Here's the broader picture — the credential-dumping tool landscape as of 2026:
Tool Sysmon Event 10 SourceImage GrantedAccess
─────────────────────── ────────────────────────────────────── ─────────────
mimikatz.exe C:\...\mimikatz.exe 0x1010
procdump64.exe C:\...\procdump64.exe 0x1FFFFF
rundll32.exe + comsvcs C:\Windows\System32\rundll32.exe 0x001F0FFF
NanoDump C:\...\nanodump.exe (or in-memory) 0x0040 (!)
PowerShell reflective C:\Windows\System32\wsmprovhost.exe 0x1010
secretsdump.py (remote)  No local Sysmon event — runs over DCE/RPC
A rule that matches on \mimikatz.exe catches exactly one row. A rule that matches on LSASS as the target with broad access rights catches the first five. A rule that monitors for DCE/RPC-based DCSync catches the sixth. Six tools. Three different detection approaches. A team that tested one variant and declared coverage has 17% coverage.
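The second approach — keying on LSASS as the target rather than on a tool name — can be sketched in KQL. This is a sketch only, assuming Sysmon Event ID 10 is forwarded into the Sentinel Event table; the regex extraction is illustrative and depends on how your collector renders the event, not a tested parser:

```kusto
// Sketch — target-based LSASS access rule (assumes Sysmon Event ID 10
// lands in the Event table; field extraction is illustrative)
Event
| where TimeGenerated > ago(1h)
| where Source == "Microsoft-Windows-Sysmon" and EventID == 10
| extend SourceImage   = extract(@"SourceImage:\s*(\S+)", 1, RenderedDescription),
         TargetImage   = extract(@"TargetImage:\s*(\S+)", 1, RenderedDescription),
         GrantedAccess = extract(@"GrantedAccess:\s*(\S+)", 1, RenderedDescription)
| where TargetImage endswith @"\lsass.exe"
// Match on access rights, not tool names — covers renamed binaries too
| where GrantedAccess in~ ("0x1010", "0x1FFFFF", "0x001F0FFF", "0x0040")
| project TimeGenerated, Computer, SourceImage, GrantedAccess
```

Because the filter is on the target and the access mask, a renamed mimikatz binary or an in-memory NanoDump loader still has to touch lsass.exe to succeed — which is what this sketch keys on.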
The course walks through each variant. By Module 7 you'll have fired all six against your lab and written rules that catch each one — or documented which ones your telemetry can't see and why.
Force 3: Environments diverge
A rule's exception logic was written for the dev environment. Here's the KQL that works in dev:
// Dev environment — one domain controller named DC01
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ ("MsMpEng.exe", "csrss.exe")
| where DeviceName !startswith "DC" // Exclude domain controller activity
The !startswith "DC" filter works when the domain controller is named DC01. Production has four domain controllers: MELDC1, LONDC1, BRSDC1, SHFDC1. None of them start with DC. The filter excludes nothing. The rule fires on every legitimate Kerberos ticket-granting event from every domain controller — four thousand false positives a day. Within a week, the SOC adds this:
// "Fix" applied by the SOC under pressure
| where DeviceName !in~ ("MELDC1","LONDC1","BRSDC1","SHFDC1")
This works until the fifth domain controller (DUBDC1) is deployed six months later. The SOC's fix doesn't include it. Legitimate activity from DUBDC1 triggers the rule again. The cycle repeats. The correct fix is a watchlist:
// Production-grade fix — watchlist of domain controllers
let DomainControllers = _GetWatchlist('DomainControllers')
| project DeviceName;
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ ("MsMpEng.exe", "csrss.exe")
| where DeviceName !in (DomainControllers)
The watchlist updates when new domain controllers are deployed. The rule doesn't need to change. But the team that tested the rule in dev with DC01 would never discover the problem without testing in an environment that looks like production.
Force 4: Telemetry pipelines break silently
The Azure Monitor Agent on a server stops forwarding. Here's the KQL heartbeat query that catches it:
// Heartbeat — which devices stopped sending events?
Heartbeat
| where TimeGenerated > ago(24h)
| summarize LastSeen = max(TimeGenerated) by Computer
| where LastSeen < ago(4h)
| project Computer, LastSeen,
HoursSilent = datetime_diff('hour', now(), LastSeen)
| order by HoursSilent desc
If this query returns results, those devices are not forwarding telemetry. Every rule that matches on events from those devices has zero coverage — regardless of what the deployment dashboard shows.
Purple teaming catches this as a side effect. When you fire T1003.001 on a target host and the Sysmon Event 10 never lands in Sentinel, the first thing you check is whether the agent is forwarding. The technique test surfaces the pipeline break.
The course includes this heartbeat check in the Module 1 smoke test. You'll run it against your own lab to confirm the pipeline works before starting the technique subs.
Force 5: Tuning was done once
A rule was tuned to exclude a backup service that produces the same telemetry as ransomware staging. Here's the tuning entry:
// Tuning applied on deployment — excludes Veeam backup
| where InitiatingProcessFileName != "VeeamAgent.exe"
Six months later, the team deploys a new monitoring tool — MonitoringAgent.exe — that also writes large batches of files and triggers the rule. The SOC, under pressure from alert volume, adds:
// Second tuning pass — adds monitoring tool
| where InitiatingProcessFileName !in~
("VeeamAgent.exe", "MonitoringAgent.exe")
The monitoring tool's process name happens to be the same string that a known ransomware dropper uses (MonitoringAgent.exe is a common masquerade name). The exclusion now suppresses both the legitimate tool and the real attack. Coverage: zero for that variant. Nobody notices because the rule stopped being noisy — which is what the tuning was supposed to achieve.
The fix is reviewing every exclusion periodically and testing whether the excluded processes overlap with known attack tool names. The Tuning Loop element in every course sub includes this check. When you fire the technique and the rule fires, you also review the exclusion list against the current false-positive landscape.
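That review can itself be a query. A minimal sketch, assuming you maintain a watchlist of known masquerade names — the watchlist name 'KnownMasqueradeNames' and the inline exclusion list are illustrative, not standard objects:

```kusto
// Sketch — flag rule exclusions that collide with known attack tool
// or masquerade names. 'KnownMasqueradeNames' is a watchlist you
// would maintain; Exclusions mirrors the rule's current tuning.
let Exclusions = dynamic(["VeeamAgent.exe", "MonitoringAgent.exe"]);
_GetWatchlist('KnownMasqueradeNames')
| extend ProcessName = tostring(SearchKey)
| where ProcessName in~ (Exclusions)
| project ProcessName
// Any row returned is an exclusion that also matches a known
// masquerade name — review it before the next tuning pass
```

Running this on the Force 5 example would surface MonitoringAgent.exe immediately, turning a silent coverage hole into a backlog item.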
The three changes
The five forces above act on every detection programme. The mindset that counters them changes three things about how you operate.
You stop trusting the dashboard. The dashboard is a configuration view. The evidence view is the last test result. Run these two queries in your Sentinel workspace and compare the numbers:
// Rules that produced any alert in the last 90 days
// (a proxy for the deployed set — a rule that never fires won't appear here at all)
SecurityAlert
| where TimeGenerated > ago(90d)
| summarize DeployedRules = dcount(AlertName)
// Rules that actually fired in the last 90 days
SecurityAlert
| where TimeGenerated > ago(90d)
| where AlertSeverity != "Informational"
| summarize FiredRules = dcount(AlertName)
The first number is usually 200+. The second is usually under 30. The gap is the work.
You stop accepting "we have a rule for it" as a sufficient answer. Either the rule has been tested in the last quarter — against the actual technique, with the actual telemetry, in the current environment — or the rule is a hypothesis.
You measure differently. Coverage is not deployed rules. Coverage is rules that fired successfully against an actual technique test in the last 90 days.
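One way to operationalise that measure, as a sketch: list rules that have fired at some point but not in the last 90 days. Note the limitation — this only sees rules with at least one historical alert; a rule that has never fired at all leaves no trace in SecurityAlert and needs a technique test to surface:

```kusto
// Sketch — rules that fired historically but have gone quiet
SecurityAlert
| where TimeGenerated > ago(365d)
| summarize LastFired = max(TimeGenerated) by AlertName
| where LastFired < ago(90d)
| order by LastFired asc
// Each row is a candidate for the next technique-of-the-week test
```

A rule on this list is not necessarily broken — the technique may simply not have occurred — which is exactly why firing the technique, not reading the list, is the evidence.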
The continuous rhythm — demonstrated
Here's what each cadence level looks like in practice.
Daily: one atomic test, 15 minutes. You pick a technique. You fire it. You check the SIEM. You log the result. Here's the actual command:
# Daily atomic test — T1059.001 PowerShell execution
Invoke-AtomicTest T1059.001 -TestNumbers 1
# Expected: Sysmon Event 1 (process creation) with
# CommandLine containing "powershell" + encoded command
# Check Sentinel:

// Did the rule fire?
SecurityAlert
| where TimeGenerated > ago(15m)
| where AlertName has "PowerShell"
| project TimeGenerated, AlertName, AlertSeverity
If the query returns a row, the rule fired. Log it. If it returns nothing, the rule is broken or the pipeline is down. Investigate.
Weekly: technique-of-the-week, 90 minutes. One ATT&CK technique walked end-to-end. All variants. All relevant environments. All three SIEMs. Here's what the VECTR entry looks like after a weekly test:
VECTR Entry — Week 12
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique: T1003.001 — LSASS Memory
Date tested: 2026-04-22
Environment: Windows endpoint (DESKTOP-NGE042)
Variants run: mimikatz, procdump, comsvcs.dll, NanoDump
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Results:
mimikatz → Sentinel: FIRED (4s) | XDR: FIRED (2s) | Splunk: FIRED (8s)
procdump → Sentinel: FIRED (4s) | XDR: FIRED (3s) | Splunk: FIRED (9s)
comsvcs.dll → Sentinel: FIRED (5s) | XDR: FIRED (2s) | Splunk: MISSED
NanoDump → Sentinel: MISSED | XDR: MISSED | Splunk: MISSED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Coverage: 3/4 variants (75%)
MTTD: 2-9 seconds (where detected)
FPs observed: 2 (VeeamAgent backup, SCCM client scan)
Action: NanoDump → added to remediation backlog (priority: high)
Splunk comsvcs rule → missing, write this week
That's one week's output. By the end of the course, you'll have fifty-two entries like this — one per technique sub — plus the chain emulations and coverage assessments.
Monthly: chain emulation, 1 day. Three to five techniques chained into a realistic sequence. You fire initial access, then credential theft, then lateral movement, then collection, then exfiltration. You check after each step: did the chain produce a correlated incident? Did the SOC workflow catch the correlation?
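The correlation check at the end of a chain run can be sketched against Sentinel's SecurityIncident table (column names per the standard schema; adjust for your workspace):

```kusto
// Sketch — after the chain run, did the alerts group into one incident?
SecurityIncident
| where TimeGenerated > ago(1d)
| mv-expand AlertIds
| summarize AlertsInIncident = dcount(tostring(AlertIds)),
            Title = any(Title)
          by IncidentNumber
| order by AlertsInIncident desc
// One incident holding most of the chain's alerts = correlation worked.
// Five single-alert incidents = the SOC sees noise, not a chain.
```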
Quarterly: coverage assessment, half a day. ATT&CK Navigator heatmap with evidence-backed status per technique. Here's what the colour coding means:
ATT&CK Navigator — Coverage Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GREEN Validated — tested in last 90 days, rule fired, tuned
YELLOW Partial — tested but some variants missed, or FP rate >5%
ORANGE Deployed but untested — rule exists, never validated
RED No coverage — no rule, or rule confirmed broken
GREY Out of scope — technique not relevant to this environment
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Target: move techniques from ORANGE → GREEN
The gap between ORANGE and GREEN is where every purple-team cycle works
The quarterly report presents this heatmap to leadership. The numbers are defensible because every GREEN square has a test result behind it. The capstone (Module 14) is a complete version of this assessment applied to CHAIN-HARVEST.
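A Navigator layer encoding this status is plain JSON. A minimal sketch — the layer metadata fields and colour values are assumptions to match to your Navigator release, and the technique comments are illustrative:

```json
{
  "name": "Coverage — Q2",
  "domain": "enterprise-attack",
  "techniques": [
    { "techniqueID": "T1059.001", "color": "#2e7d32",
      "comment": "GREEN — validated 2026-04-22, daily atomic test" },
    { "techniqueID": "T1003.001", "color": "#f9a825",
      "comment": "YELLOW — NanoDump variant missed" },
    { "techniqueID": "T1021.002", "color": "#e65100",
      "comment": "ORANGE — rule deployed, never validated" }
  ]
}
```

Because every comment carries a test date or a reason, the layer file itself is the evidence trail behind the heatmap.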
The defensive deliverable
Everything you do in this course produces a defensive outcome. The artefacts from every sub:
A tested rule. Not a template. A rule you've fired the technique against and confirmed works — or documented doesn't work, with the reason and the remediation plan.
A false-positive profile. Here's a concrete example from the LSASS rule:
FP Profile — LSASS Access Detection (Multi-Variant)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Environmental FPs:
VeeamAgent.exe → backup credential validation (exclude by hash)
MsMpEng.exe → Defender real-time scan (already in filter)
SCCM client → software inventory (exclude by parent: CcmExec.exe)
Rule-logic FPs:
None observed at current threshold
Benign TPs:
IT admin running procdump for crash dump (legitimate use, close alert)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Baseline FP rate: ~3/week after tuning
Review cadence: monthly (check for new software deployments)
A coverage record. Per technique, per environment, per SIEM. Logged in VECTR, reflected in the Navigator heatmap.
A populated programme template. By Module 14, the template contains real data from real tests — MTTD per technique, FP rate per rule, coverage percentage with evidence, remediation backlog with priorities.
That's the mindset: "I do not know whether my detections work until I have personally fired the attack against them." The course is the rhythm that makes this sustainable.
You've built the lab and understand the validation gap.
Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.
- 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
- Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
- Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
- Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
- Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course