PT0.3 The Vocabulary of Coverage
You've reported on detection coverage to leadership — probably as a count of deployed rules, a percentage of ATT&CK techniques covered, or a traffic-light dashboard. You know those numbers feel incomplete. This sub gives you the vocabulary to replace them with metrics that survive scrutiny — each one demonstrated with the actual data, queries, and calculations you'd use in your own environment.
Figure PT0.3 — The six metrics the course tracks. Each is populated per technique as you walk the course.
Metric 1: MTTD — mean time to detect
MTTD is the time from the moment you execute the attack to the moment the first alert appears in the SIEM. Here's a worked example.
You run credential dumping at 14:32:00:
# Attack executed at 14:32:00
procdump.exe -ma lsass.exe C:\Windows\Temp\debug.dmp
The Sentinel analytics rule is scheduled to run every 5 minutes. At 14:37:12, the rule fires:
// The alert that fired
SecurityAlert
| where TimeGenerated == datetime(2026-04-22T14:37:12Z)
| project TimeGenerated, AlertName, AlertSeverity
TimeGenerated            AlertName                          AlertSeverity
───────────────────────  ────────────────────────────────   ─────────────
2026-04-22T14:37:12Z     LSASS Access - Credential Dump     High
MTTD = 14:37:12 - 14:32:00 = 5 minutes 12 seconds.
That number is a fact. The judgement depends on what happens next. If automated isolation triggers on the alert and completes in 30 seconds, total exposure is 5 minutes 42 seconds. If the alert goes to a queue that isn't triaged for four hours, the MTTD is irrelevant — the response time dominates.
You can measure MTTD for any technique with this KQL pattern:
// Measure MTTD — time between technique execution and first alert
let AttackTime = datetime(2026-04-22T14:32:00Z);
SecurityAlert
| where TimeGenerated > AttackTime
| where AlertName has "LSASS" or AlertName has "Credential"
| summarize FirstAlert = min(TimeGenerated)
| extend MTTD = FirstAlert - AttackTime
| project AttackTime, FirstAlert, MTTD
AttackTime               FirstAlert               MTTD
───────────────────────  ───────────────────────  ──────────────
2026-04-22T14:32:00Z     2026-04-22T14:37:12Z     00:05:12
The course records this per technique, per variant, per SIEM. By Module 14, you'll answer "how fast do we detect credential dumping" with a number — not a feeling.
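When you're logging MTTD per technique, per variant, per SIEM, the same arithmetic is easy to keep outside the SIEM too. A minimal Python sketch, using the worked example's timestamps (the function name is illustrative, not course-supplied):

```python
from datetime import datetime, timedelta

def mttd(attack_time: datetime, first_alert: datetime) -> timedelta:
    """Mean time to detect: first-alert timestamp minus execution timestamp."""
    return first_alert - attack_time

# Values from the worked example above
attack = datetime.fromisoformat("2026-04-22T14:32:00")
alert = datetime.fromisoformat("2026-04-22T14:37:12")
print(mttd(attack, alert))  # 0:05:12
```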
Metric 2: Validated coverage percentage
Deployed coverage is the count of rules in your SIEM. Validated coverage is the subset that has been tested against the actual technique in the last 90 days. Here's a worked example.
Your threat model scopes 61 ATT&CK techniques (the same 61 the course covers). After your first quarter of purple-team work:
Programme Coverage — Q1 Assessment
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Techniques in scope: 61
Techniques with deployed rules: 45 (74% deployed)
Techniques tested in last 90 days: 18 (29.5% validated)
Techniques with broken rules (found by test): 7 (15.6% of deployed)
Techniques with no rule at all: 16 (26.2% uncovered)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Validated coverage: 18 ÷ 61 = 29.5%
The 74% deployed number is the one most teams report. The 29.5% validated number is the one that's true. The 7 broken rules — rules that exist, show as active, but don't fire — were invisible until testing surfaced them.
The 90-day window is load-bearing. Vendor telemetry shifts, attacker tool evolution, environment divergence, and tuning drift all have a realistic chance of breaking a rule within 90 days. A rule validated last week almost certainly works. A rule validated six months ago is a hypothesis.
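The deployed/validated split is simple set arithmetic over your per-technique records. A Python sketch with made-up sample data (the record layout and field names are assumptions, not the course template's schema):

```python
from datetime import date, timedelta

# Hypothetical records: (technique_id, has_rule, last_tested, test_passed)
techniques = [
    ("T1003.001", True, date(2026, 4, 22), True),
    ("T1059.001", True, date(2026, 4, 23), True),
    ("T1078.004", True, date(2025, 11, 2), True),   # stale: outside the 90-day window
    ("T1055.001", False, None, False),              # no rule deployed at all
]

def coverage(records, today=date(2026, 4, 30), window=timedelta(days=90)):
    in_scope = len(records)
    deployed = sum(1 for _, has_rule, *_ in records if has_rule)
    # Validated = deployed AND tested AND passing AND tested recently
    validated = sum(
        1 for _, has_rule, tested, passed in records
        if has_rule and tested and passed and today - tested <= window
    )
    return {
        "deployed_pct": round(100 * deployed / in_scope, 1),
        "validated_pct": round(100 * validated / in_scope, 1),
    }

print(coverage(techniques))  # {'deployed_pct': 75.0, 'validated_pct': 50.0}
```

Note how the stale T1078.004 test drops out of the validated count: a rule validated six months ago counts as a hypothesis, exactly as the 90-day rule intends.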
Metric 3: Detection quality score
Not all detections are equal. A rule that fires but produces 200 false positives per week is worse than a rule that fires cleanly. Detection quality captures this using three components, each scored 0–5.
Here's a worked example for the LSASS credential dumping rule:
Detection Quality Score — T1003.001 LSASS Memory
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Component 1: Coverage (does the rule fire?)
Catches 5 of 6 known variants (Mimikatz, procdump,
comsvcs.dll, reflective loader, DCSync).
Misses NanoDump (documented in remediation backlog).
Score: 4/5
Component 2: Telemetry richness (what does the alert include?)
Alert includes: SourceImage, TargetImage, GrantedAccess,
SourceUser, DeviceName, TimeGenerated, ProcessId.
Triage analyst can identify the tool, the target, and the
user without pivoting to another table.
Score: 5/5
Component 3: Tuning maturity (has FP noise been managed?)
Two environmental FPs tuned out (VeeamAgent, SCCM client).
One benign TP documented (IT admin procdump for crash dumps).
No blanket exclusions. Tuning reviewed within last 30 days.
But NanoDump gap is open — accepted risk, not tuned.
Score: 3/5
Detection Quality Score: (4 × 5 × 3) ÷ (5 × 5 × 5) × 100 = 48/100
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Interpretation: functional detection with a coverage gap
and room for tuning improvement. Priority: close the
NanoDump gap to raise coverage to 5/5, which lifts the
score to 60/100. Full tuning review lifts it further.
A score of 48 means "working but needs attention." A score of 80+ means "solid, maintain it." A score below 20 means "broken or absent — prioritise." The quarterly coverage assessment uses these scores to decide where to spend the next cycle's effort.
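The score itself is a one-line calculation. A Python sketch of the formula shown above (the function name is illustrative):

```python
def quality_score(coverage: int, telemetry: int, tuning: int) -> int:
    """Detection quality: product of the three 0-5 component scores,
    normalised to 0-100 against the maximum product (5 * 5 * 5 = 125)."""
    for s in (coverage, telemetry, tuning):
        if not 0 <= s <= 5:
            raise ValueError("component scores must be 0-5")
    return round(coverage * telemetry * tuning / 125 * 100)

print(quality_score(4, 5, 3))  # 48 -- the LSASS example above
print(quality_score(5, 5, 3))  # 60 -- after closing the NanoDump gap
```

Multiplying the components rather than averaging them is deliberate: a zero in any component zeroes the score, so a rule that never fires scores 0 no matter how rich its telemetry would have been.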
Metric 4: False-positive classification
Three categories. Different fix for each. Here's a concrete example of all three from the same LSASS detection rule.
Environmental FP — the backup service.
Your rule fires on this Sysmon Event 10:
{
"EventID": 10,
"SourceImage": "C:\\Program Files\\Veeam\\Backup\\VeeamAgent.exe",
"TargetImage": "C:\\Windows\\System32\\lsass.exe",
"GrantedAccess": "0x1010",
"SourceUser": "NT AUTHORITY\\SYSTEM"
}
The rule correctly detects LSASS access. The source is a backup agent performing legitimate credential validation. The rule is right. The environment produces legitimate activity that looks like the attack. Fix: exclude VeeamAgent.exe by full path and hash, scoped narrowly:
// Environmental FP exclusion — Veeam backup agent
| where InitiatingProcessFileName != "VeeamAgent.exe"
or InitiatingProcessFolderPath !startswith
    "C:\\Program Files\\Veeam\\Backup\\"
Verify the exclusion doesn't suppress a real attack that masquerades as VeeamAgent.exe by checking the path — an attacker dropping a binary named VeeamAgent.exe in C:\Windows\Temp\ wouldn't match the folder-path filter.
Rule-logic FP — the overly broad PowerShell rule.
A PowerShell execution detection rule fires on every PowerShell.exe process start:
// Too broad — fires on ALL PowerShell
DeviceProcessEvents
| where FileName =~ "powershell.exe"
This produces hundreds of alerts per day — IT automation, login scripts, SCCM. The rule logic is too broad. Fix: tighten the query to match encoded commands and download cradles specifically:
// Tightened — matches attack patterns, not all PowerShell
DeviceProcessEvents
| where FileName =~ "powershell.exe"
| where ProcessCommandLine has_any (
"-enc", "-EncodedCommand", "FromBase64String",
"Net.WebClient", "DownloadString", "Invoke-Expression",
"IEX", "bypass", "-nop"
    )
Retest after tightening to confirm the real attack variant still fires.
Benign TP — the IT admin running procdump.
Your rule fires on this event:
{
"EventID": 10,
"SourceImage": "C:\\Tools\\procdump64.exe",
"TargetImage": "C:\\Windows\\System32\\lsass.exe",
"GrantedAccess": "0x1FFFFF",
"SourceUser": "NORTHGATE\\admin.jmorris"
}
The detection is correct. The activity is real. The user (admin.jmorris) is an IT administrator running procdump to collect a crash dump for a support ticket. This is a benign true positive — the alert is doing exactly what it should. Fix: acknowledge, classify, close. Do not tune it out. The same pattern from NORTHGATE\t.ashworth (a finance user) is the real attack.
Every Tuning Loop element in every technique sub classifies the expected false positives into these three categories and gives you the specific fix for each.
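The three-way split maps cleanly to a lookup from category to fix. A Python sketch (the enum names and fix strings are illustrative summaries of the three worked examples above, not course-supplied code):

```python
from enum import Enum

class FPType(Enum):
    ENVIRONMENTAL = "environmental"  # rule is right; environment looks like the attack
    RULE_LOGIC = "rule_logic"        # rule matches too broadly
    BENIGN_TP = "benign_tp"          # real activity, authorised actor

# One fix per category, per the three worked examples
FIXES = {
    FPType.ENVIRONMENTAL: "Add a narrowly scoped exclusion (full path + hash), then verify it.",
    FPType.RULE_LOGIC: "Tighten the query to attack patterns, then retest the real variant.",
    FPType.BENIGN_TP: "Acknowledge, classify, close. Do not tune it out.",
}

def recommended_fix(fp_type: FPType) -> str:
    return FIXES[fp_type]

print(recommended_fix(FPType.BENIGN_TP))
```

The point of the structure is that "fix" is a function of category, never of annoyance level: the benign TP gets closed, not suppressed.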
Metric 5: Remediation backlog
Every purple-team cycle produces gaps. The backlog tracks them. Here's a populated entry:
Remediation Backlog Entry
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique: T1003.001 — LSASS Memory
Gap: NanoDump variant not detected by current rule
Root cause: NanoDump uses GrantedAccess 0x0040 (PROCESS_DUP_HANDLE)
which bypasses the 0x10/0x1FFFFF filter in the Sigma rule.
Sysmon Event 10 fires but with a different access mask.
Priority: HIGH (NanoDump is actively used in 2026 ransomware
campaigns — M-Trends 2026 reports it in 23% of
credential-access incidents)
Effort: MEDIUM (requires Sysmon config update to capture
GrantedAccess 0x0040 + new Sigma rule variant)
Status: OPEN
Assigned: Week 14 technique cycle
Created: 2026-04-22
Last review: 2026-04-22
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
The entry names the gap, the root cause, the priority (with citation), the effort to fix, and the status. The quarterly review presents the backlog to leadership. The maturity signal isn't zero gaps — it's gaps identified, prioritised, tracked, and communicated.
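Prioritising the backlog is a sort over status, priority, and age. A Python sketch (the dataclass fields mirror the entry above; the second sample entry and the ranking order are illustrative assumptions):

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative ordering: HIGH before MEDIUM before LOW
PRIORITY_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

@dataclass
class BacklogEntry:
    technique: str
    gap: str
    priority: str   # HIGH / MEDIUM / LOW
    effort: str
    status: str = "OPEN"
    created: date = field(default_factory=date.today)

def triage(backlog):
    """Open entries only: highest priority first, oldest first within a priority."""
    open_items = [e for e in backlog if e.status == "OPEN"]
    return sorted(open_items, key=lambda e: (PRIORITY_RANK[e.priority], e.created))

backlog = [
    BacklogEntry("T1078.004", "No rule for cloud-account abuse", "MEDIUM", "HIGH",
                 created=date(2026, 3, 1)),
    BacklogEntry("T1003.001", "NanoDump variant not detected", "HIGH", "MEDIUM",
                 created=date(2026, 4, 22)),
]
print([e.technique for e in triage(backlog)])  # ['T1003.001', 'T1078.004']
```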
Metric 6: Programme cadence compliance
Binary per week. Did you run the test or didn't you? Here's what the tracker looks like:
Cadence Tracker — April 2026
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Week Technique Tested Daily Weekly Monthly
────── ──────────────────────── ───── ────── ───────
W14 T1003.001 LSASS Memory 5/5 ✓ —
W15 T1059.001 PowerShell 4/5 ✓ —
W16 T1055.001 DLL Injection 5/5 ✓ —
W17 (production incident) 2/5 ✗ ✓ (chain)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Monthly cadence: April chain emulation completed W17
Weekly compliance: 3/4 (75%) — W17 missed due to incident
Daily compliance: 16/20 (80%)
Week 17 was missed because of a production incident. The tracker records it honestly. The quarterly review notes the miss and documents whether a reduced-scope test (a single atomic, 15 minutes) would have been feasible. Cadence compliance is about sustainability, not perfection.
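Both compliance percentages fall out of a small aggregation over the tracker rows. A Python sketch using the April figures above (the tuple layout is an assumption, not the workbook's format):

```python
def cadence_compliance(weeks):
    """weeks: one (daily_done, daily_target, weekly_done) tuple per week."""
    daily_done = sum(d for d, _, _ in weeks)
    daily_target = sum(t for _, t, _ in weeks)
    weekly_done = sum(1 for _, _, w in weeks if w)
    return {
        "daily_pct": round(100 * daily_done / daily_target),
        "weekly_pct": round(100 * weekly_done / len(weeks)),
    }

# April 2026 from the tracker above: W14-W17
april = [(5, 5, True), (4, 5, True), (5, 5, True), (2, 5, False)]
print(cadence_compliance(april))  # {'daily_pct': 80, 'weekly_pct': 75}
```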
The programme template — what it actually looks like
All six metrics live in a single Excel workbook. Here's the structure with sample data from one technique:
Tab 1: Coverage Matrix
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique Env Sentinel Defender XDR Splunk Last Test
───────────── ─────── ──────── ──────────── ────── ──────────
T1003.001 Windows PASS PASS PASS 2026-04-22
T1003.001 AD PASS PASS MISS 2026-04-22
T1003.002 Windows PASS PASS PASS 2026-04-15
T1003.003 AD PASS FAIL N/A 2026-04-08
T1059.001 Windows PASS PASS PASS 2026-04-23
T1059.001 Linux PASS N/A PASS 2026-04-23
Tab 2: MTTD Log
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique Variant Sentinel XDR Splunk
───────────── ──────────── ──────── ───── ──────
T1003.001 mimikatz 4s 2s 8s
T1003.001 procdump 4s 3s 9s
T1003.001 comsvcs.dll 5s 2s MISS
T1003.001 NanoDump MISS MISS MISS
Tab 3: FP Classification
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique FP Source Type Fix Applied
───────────── ─────────────────── ───────────── ──────────────────
T1003.001 VeeamAgent.exe Environmental Path+hash exclusion
T1003.001 SCCM CcmExec.exe Environmental Parent proc filter
T1003.001 IT admin procdump Benign TP Close, don't tune
T1059.001 Login scripts Rule-logic Tightened to -enc
Tab 4: Detection Quality Scores
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique Coverage Telemetry Tuning Score Trend
───────────── ──────── ───────── ────── ───── ─────
T1003.001 4/5 5/5 3/5 48 ↑
T1059.001 5/5 4/5 4/5 64 →
T1078.004     2/5      3/5       1/5     5     NEW
By Module 14, every row has data from your own lab. The template is the artefact you take to your team — populated, evidence-backed, defensible.
You've built the lab and understand the validation gap.
Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.
- 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
- Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
- Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
- Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
- Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course