In this module
PT0.1 How Real Incidents Actually Unfold — and What Your Rules Miss
You write or maintain detection rules. You've deployed Sigma or KQL rules to a SIEM. You've seen alerts fire and you've triaged them. This sub doesn't teach you what a detection rule is. It shows you what happens when you stop trusting the rule and start testing it — and why that shift is the entire course.
Figure PT0.1 — The coverage gap. A typical enterprise SIEM has 200+ deployed rules. Fewer than 30 have been tested against the actual technique in the last 90 days. The gap is where the attacker operates.
The Tuesday afternoon question
It's Tuesday afternoon. The IT director walks past your desk and asks the question that always lands at the wrong time.
"If someone ran Mimikatz against one of our domain controllers right now, would we catch it?"
You say yes. There's a Sigma rule in the repo. There's a Sentinel analytics rule deployed. The director walks off. Three minutes later you realise you don't actually know — you've never run Mimikatz on a system you own and watched what fires.
This sub shows you exactly why that matters. Three coverage claims. Three concrete failures. Actual telemetry, actual rules, actual queries.
Claim one: "Our credential dumping detection is solid"
The team caught Mimikatz in a pen test six months ago. The Sigma rule has been in production since. Here's the rule:
title: LSASS Memory Access - Mimikatz
id: 0d894093-71bc-43c3-8985-4513b67d0b6b
status: stable
logsource:
    category: process_access
    product: windows
detection:
    selection:
        TargetImage|endswith: '\lsass.exe'
        SourceImage|endswith: '\mimikatz.exe'
        GrantedAccess:
            - '0x1010'
            - '0x1410'
    condition: selection
level: critical
tags:
    - attack.credential_access
    - attack.t1003.001

The rule matches when mimikatz.exe opens lsass.exe with specific access rights. It works — against Mimikatz.
Now here's what actually happens in the real world. An attacker runs credential dumping using procdump, a signed Microsoft Sysinternals binary that's already on many endpoints:
procdump.exe -ma lsass.exe C:\Windows\Temp\debug.dmp

Sysmon Event 10 fires for both attacks. But look at the difference. Here's the event when Mimikatz runs:
{
    "EventID": 10,
    "SourceImage": "C:\\Users\\attacker\\mimikatz.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1010",
    "SourceUser": "NORTHGATE\\t.ashworth"
}

The Sigma rule matches. SourceImage ends with \mimikatz.exe. Alert fires. Detection works.
Now here's the event when procdump runs the exact same credential theft:
{
    "EventID": 10,
    "SourceImage": "C:\\Windows\\Temp\\procdump64.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1FFFFF",
    "SourceUser": "NORTHGATE\\t.ashworth"
}

Same target — lsass.exe. Same attacker. Same stolen credentials. But SourceImage is procdump64.exe, not mimikatz.exe. The GrantedAccess is 0x1FFFFF (full access), not 0x1010. The Sigma rule doesn't match. No alert. The dashboard stays green. The credentials are gone.
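That miss can be reproduced outside any SIEM. The sketch below is a simplified Python stand-in for the Sigma matching logic (not sigma-cli or a real backend), replayed against the two events above:

```python
# Simplified stand-in for the Sigma rule above: SourceImage must end with
# \mimikatz.exe AND GrantedAccess must be one of the two listed values.
def rule_matches(event):
    return (
        event["TargetImage"].lower().endswith("\\lsass.exe")
        and event["SourceImage"].lower().endswith("\\mimikatz.exe")
        and event["GrantedAccess"] in ("0x1010", "0x1410")
    )

mimikatz_event = {
    "SourceImage": "C:\\Users\\attacker\\mimikatz.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1010",
}
procdump_event = {
    "SourceImage": "C:\\Windows\\Temp\\procdump64.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1FFFFF",
}

print(rule_matches(mimikatz_event))  # True  - alert fires
print(rule_matches(procdump_event))  # False - same theft, no alert
```

Two field values change, and the same credential theft slips past the rule.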
Here's what the rule should look like to catch both variants — and four others. The tabs below show the same detection logic in every format the course covers:
title: LSASS Memory Access - Credential Dumping (Multi-Variant)
id: a4f2c8e0-91d7-4b5a-8c3e-6d9f0e1a2b34
status: stable
logsource:
    category: process_access
    product: windows
detection:
    selection:
        TargetImage|endswith: '\lsass.exe'
        GrantedAccess|contains:
            - '0x10'
            - '0x1FFFFF'
    filter_system:
        SourceImage|startswith:
            - 'C:\Windows\System32\'
            - 'C:\Program Files\Windows Defender\'
    condition: selection and not filter_system
level: critical
tags:
    - attack.credential_access
    - attack.t1003.001
// Sentinel KQL — LSASS Access (Multi-Variant)
// Table: DeviceEvents (via MDE connector); process-access telemetry
// surfaces as OpenProcessApiCall events, not in DeviceProcessEvents
DeviceEvents
| where TimeGenerated > ago(1h)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
    "MsMpEng.exe", "csrss.exe", "services.exe", "svchost.exe"
)
| where InitiatingProcessFolderPath !startswith "C:\\Windows\\System32\\"
| project TimeGenerated, DeviceName, InitiatingProcessFileName,
    InitiatingProcessCommandLine, AccountName
// Defender XDR Advanced Hunting — LSASS Access (Multi-Variant)
// Note: Timestamp instead of TimeGenerated; same DeviceEvents table,
// same OpenProcessApiCall action type
DeviceEvents
| where Timestamp > ago(1h)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
    "MsMpEng.exe", "csrss.exe", "services.exe", "svchost.exe"
)
| where InitiatingProcessFolderPath !startswith "C:\\Windows\\System32\\"
| project Timestamp, DeviceName, InitiatingProcessFileName,
    InitiatingProcessCommandLine, AccountName
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
EventCode=10 TargetImage="*\\lsass.exe"
| where NOT match(SourceImage, "^C:\\\\Windows\\\\System32\\\\")
| where NOT match(SourceImage, "^C:\\\\Program Files\\\\Windows Defender\\\\")
| table _time, Computer, SourceImage, GrantedAccess, SourceUser
| sort - _time
This is the pattern every technique sub follows: one detection logic, four formats in tabs. The Sigma rule is canonical; the other three are platform conversions. Click each tab to see how the same detection looks in your SIEM.
You wouldn't know which rule you had without testing both variants. That's the gap.
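A quick sanity check before deploying the multi-variant rule is to replay known events through its logic. The Python below mirrors the Sigma selection and filter above in simplified form (an illustration, not a Sigma backend):

```python
# Stand-in for the multi-variant logic: flag any process opening lsass.exe
# with the listed access patterns, unless it runs from a trusted path.
TRUSTED_PATHS = ("C:\\Windows\\System32\\",
                 "C:\\Program Files\\Windows Defender\\")

def multi_variant_matches(event):
    selection = (
        event["TargetImage"].lower().endswith("\\lsass.exe")
        and ("0x10" in event["GrantedAccess"]
             or event["GrantedAccess"] == "0x1FFFFF")
    )
    filter_system = event["SourceImage"].startswith(TRUSTED_PATHS)
    return selection and not filter_system

mimikatz_event = {"SourceImage": "C:\\Users\\attacker\\mimikatz.exe",
                  "TargetImage": "C:\\Windows\\System32\\lsass.exe",
                  "GrantedAccess": "0x1010"}
procdump_event = {"SourceImage": "C:\\Windows\\Temp\\procdump64.exe",
                  "TargetImage": "C:\\Windows\\System32\\lsass.exe",
                  "GrantedAccess": "0x1FFFFF"}
# Legitimate access by Defender's engine, filtered by the trusted-path clause
defender_event = {"SourceImage": "C:\\Program Files\\Windows Defender\\MsMpEng.exe",
                  "TargetImage": "C:\\Windows\\System32\\lsass.exe",
                  "GrantedAccess": "0x1FFFFF"}

print(multi_variant_matches(mimikatz_event))  # True
print(multi_variant_matches(procdump_event))  # True
print(multi_variant_matches(defender_event))  # False - trusted path
```

Both attack variants now match; the legitimate system access does not.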
Claim two: "We have AiTM detection"
After the credential phishing news cycle, someone wrote this KQL rule for Sentinel:
// AiTM Detection - Impossible Travel with New Device
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType == 0
| where RiskLevelDuringSignIn == "none"
| where DeviceDetail_deviceId == ""
| where Location != PreviousLocation
| project TimeGenerated, UserPrincipalName, Location, IPAddress,
DeviceDetail_deviceId, AppDisplayNameThe rule has been deployed for four months. It's been quiet. Here's the problem: Microsoft changed a field in February. DeviceDetail_deviceId moved to DeviceDetail.deviceId (nested property access). The old field still exists in the schema but always returns empty. The rule runs. It never matches. The dashboard shows it as active.
Here's the before-and-after:
// BEFORE (broken after February schema update)
| where DeviceDetail_deviceId == ""
// The stale flattened field now always returns null, and null == ""
// is never true in KQL, so this filter matches NOTHING
// Net result: zero matches, zero alerts, zero coverage

// AFTER (fixed)
| where tostring(DeviceDetail.deviceId) == ""
// Correct nested property access; tostring() turns null into "", so this
// matches AiTM sign-ins where no device is registered (the attacker's
// replayed session)

One character difference: an underscore became a dot. The rule was working in December. It broke in February. Nobody noticed because the dashboard showed "deployed and active." The only way to find this: fire an AiTM attack against your dev tenant and check whether the alert appears.
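The failure mode is easy to model. In KQL, a comparison against a null value does not evaluate to true, which is exactly why tostring() matters; Python's None behaves similarly when compared to an empty string. A sketch of the two filters (an analogy with invented rows, not the Kusto engine):

```python
# Each row mimics a SigninLogs record. The broken rule reads the stale
# flattened field, which now comes back null (None here); the fixed rule
# reads the nested property and coerces null to "" like KQL's tostring().
rows = [
    {"user": "t.ashworth", "DeviceDetail_deviceId": None,      # stale: null
     "DeviceDetail": {"deviceId": ""}},                        # AiTM: no device
    {"user": "j.legit", "DeviceDetail_deviceId": None,
     "DeviceDetail": {"deviceId": "a1b2-managed-device"}},     # registered
]

def tostring(value):
    # Mirrors KQL tostring(): null becomes the empty string
    return "" if value is None else str(value)

broken = [r for r in rows if r["DeviceDetail_deviceId"] == ""]  # None == "" is False
fixed = [r for r in rows if tostring(r["DeviceDetail"]["deviceId"]) == ""]

print(len(broken))                  # 0 - the broken filter matches nothing
print([r["user"] for r in fixed])   # only the AiTM sign-in survives
```

The broken filter silently returns zero rows for every sign-in; the fixed one isolates exactly the sessions with no registered device.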
Claim three: "We have ransomware coverage"
The team has a Sigma rule for T1486 that matches on file extension changes. The rule works — it fires when files start getting renamed to .encrypted, .locked, .crypted. But look at the timeline of an actual ransomware attack:
14:31 Initial access — phishing email, user clicks ← no rule fires
14:36 Credential theft — LSASS dumped on user workstation ← no rule fires
15:14 Lateral movement — attacker moves to file server ← no rule fires
15:28 Discovery — attacker maps the share ← no rule fires
15:41 Staging — attacker archives target files ← no rule fires
15:52 Exfiltration — archive sent to C2 ← no rule fires
16:01 Encryption begins — ransomware binary executes ← no rule fires
16:04 File extensions change ← RULE FIRES HERE
16:08 Encryption complete — ransom note dropped

The rule fires at 16:04. By then: credentials stolen ninety minutes ago, attacker on the file server for fifty minutes, data already exfiltrated, encryption three minutes from completion. The alert is real. The coverage is real. But the useful detection window — where containment could have prevented data loss — was between 14:36 and 15:52. The rule covers none of that window.
A purple-team exercise surfaces this by walking the chain step by step. You fire each technique in sequence and check after each: did any rule fire? For most teams, steps 1 through 7 produce zero alerts. Step 8 produces one — and by then it's too late.
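Walking the chain lends itself to scripting. The sketch below shows only the bookkeeping: the step names are taken from the timeline above, while alert_fired is a hypothetical placeholder; in a real exercise each step executes an attack command and then queries the SIEM for new alerts:

```python
# Steps 1-8 of the chain above (the 16:08 completion step has no check).
chain = [
    "initial_access", "credential_theft", "lateral_movement", "discovery",
    "staging", "exfiltration", "encryption_start", "extension_change",
]

# Stand-in for "query the SIEM after firing the technique": the only
# deployed rule keys on file-extension changes.
def alert_fired(step):
    return step == "extension_change"

coverage = {step: alert_fired(step) for step in chain}
gaps = [step for step, fired in coverage.items() if not fired]

print(f"{len(gaps)}/{len(chain)} steps produced no alert")  # 7/8
print(gaps)  # the uncovered window, from initial access to encryption start
```

The output is the coverage gap made explicit: every step where containment was still possible produced nothing.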
Why rules break silently
The three claims above are symptoms. The systemic causes are predictable:
Vendor telemetry shifts. The AiTM example — DeviceDetail_deviceId becomes DeviceDetail.deviceId. The rule loads, runs, matches nothing.
Attacker tools evolve. The credential dumping example — mimikatz.exe becomes procdump64.exe. Same technique, different Sysmon Event 10 fields.
Environments diverge. A rule's exception matches DC*. Production has MELDC1. The rule fires on every legitimate replication event. The SOC mutes it. Coverage: zero.
Telemetry sources break. The Azure Monitor Agent stops forwarding after a patch. The rule has no events to match on.
Tuning was done once. New software produces the same pattern. Someone adds a blanket exclusion that also excludes the real attack.
Each failure is invisible on the dashboard. Each becomes visible the moment you fire the attack.
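That invisibility is measurable. The sketch below shows one way to compute the staleness number behind Figure PT0.1; the rule names and last-validated dates are invented for illustration, and in practice the inventory comes from your SIEM's API and the dates from your test log:

```python
from datetime import date, timedelta

# Illustrative inventory: deployed rules mapped to the date each was last
# validated against the live technique (None = never tested).
rules = {
    "lsass_access_mimikatz": date(2024, 1, 10),
    "aitm_new_device": None,
    "ransomware_extensions": date(2024, 5, 2),
}

def stale(last_tested, today=date(2024, 5, 20), window=timedelta(days=90)):
    # A rule counts as untested if it was never validated, or if its last
    # validation falls outside the 90-day window.
    return last_tested is None or today - last_tested > window

untested = [name for name, last in rules.items() if stale(last)]
print(untested)  # rules with no recent evidence they still fire
```

Run against a real inventory, this is the "200 deployed, fewer than 30 tested" gap as a list of rule names rather than a dashboard colour.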
What changes when you test
When you reach Module 7, you'll run all six LSASS dumping variants against your own lab. You'll see events like the ones above in your own Sentinel workspace. You'll write the multi-variant Sigma rule. You'll convert it to KQL:
// Sentinel KQL — LSASS Access (Multi-Variant)
DeviceEvents
| where TimeGenerated > ago(1h)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
    "MsMpEng.exe", "csrss.exe", "services.exe", "svchost.exe"
)
| project TimeGenerated, DeviceName, InitiatingProcessFileName,
    InitiatingProcessCommandLine, AccountName

You'll tune it. You'll log the result. And your answer to the IT director will change from "yes, we have a rule" to "yes — I tested six variants this month, five fire in under 5 seconds, one is in the remediation backlog. Here's the report."
Same question. Different answer. The difference is evidence.
You've built the lab and understand the validation gap.
Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.
- 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
- Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
- Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
- Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
- Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course