PT0.1 How Real Incidents Actually Unfold — and What Your Rules Miss

10 minutes · Module 0 · Free
What you already know

You write or maintain detection rules. You've deployed Sigma or KQL rules to a SIEM. You've seen alerts fire and you've triaged them. This sub doesn't teach you what a detection rule is. It shows you what happens when you stop trusting the rule and start testing it — and why that shift is the entire course.

Operational Objective
Your team has detection rules. The dashboard shows them as deployed and active. Leadership asks whether you'd catch a specific attack. You say yes because the rule exists. But you've never fired the attack against the rule and watched what happens. The gap between "the rule exists" and "the rule fires against the actual technique" is where breaches live. This sub makes that gap concrete — with actual telemetry, actual rules, and actual queries — by walking three techniques where the coverage claim is real but the detection is broken.
Deliverable: A concrete understanding of why detection rules fail silently, demonstrated with real Sysmon events, Sigma rules, and KQL queries you can compare against your own environment.
Estimated completion: 20 minutes
THE COVERAGE GAP — DEPLOYED vs VALIDATED

What the dashboard shows:
  ✓ Credential dumping rule deployed
  ✓ Ransomware detection active
  ✓ AiTM rule deployed — no alerts
  Status: deployed. Last modified: 14 months ago.

What happens when you test:
  ✗ Credential rule catches 1 of 6 variants
  ✗ Ransomware rule fires only after encryption begins
  ✗ AiTM rule broken — a field changed in February
  Status: deployed and broken. Nobody knows.

In a typical enterprise SIEM: 200+ rules deployed, fewer than 30 validated in the last 90 days, 10-30% actual coverage — the gap the attacker lives in. The gap between deployed and validated is where breaches live. Purple teaming closes it by testing every rule against the actual technique.

Figure PT0.1 — The coverage gap. A typical enterprise SIEM has 200+ deployed rules. Fewer than 30 have been tested against the actual technique in the last 90 days. The gap is where the attacker operates.

The Tuesday afternoon question

It's Tuesday afternoon. The IT director walks past your desk and asks the question that always lands at the wrong time.

"If someone ran Mimikatz against one of our domain controllers right now, would we catch it?"

You say yes. There's a Sigma rule in the repo. There's a Sentinel analytics rule deployed. The director walks off. Three minutes later you realise you don't actually know — you've never run Mimikatz on a system you own and watched what fires.

This sub shows you exactly why that matters. Three coverage claims. Three concrete failures. Actual telemetry, actual rules, actual queries.

Claim one: "Our credential dumping detection is solid"

The team caught Mimikatz in a pen test six months ago. The Sigma rule has been in production since. Here's the rule:

title: LSASS Memory Access - Mimikatz
id: 0d894093-71bc-43c3-8985-4513b67d0b6b
status: stable
logsource:
    category: process_access
    product: windows
detection:
    selection:
        TargetImage|endswith: '\lsass.exe'
        SourceImage|endswith: '\mimikatz.exe'
        GrantedAccess:
            - '0x1010'
            - '0x1410'
    condition: selection
level: critical
tags:
    - attack.credential_access
    - attack.t1003.001

The rule matches when mimikatz.exe opens lsass.exe with specific access rights. It works — against Mimikatz.

Now here's what actually happens in the real world. An attacker runs credential dumping using procdump, a signed Microsoft Sysinternals binary that's already on many endpoints:

procdump.exe -ma lsass.exe C:\Windows\Temp\debug.dmp

Sysmon Event 10 fires for both attacks. But look at the difference. Here's the event when Mimikatz runs:

{
  "EventID": 10,
  "SourceImage": "C:\\Users\\attacker\\mimikatz.exe",
  "TargetImage": "C:\\Windows\\System32\\lsass.exe",
  "GrantedAccess": "0x1010",
  "SourceUser": "NORTHGATE\\t.ashworth"
}

The Sigma rule matches. SourceImage ends with \mimikatz.exe. Alert fires. Detection works.

Now here's the event when procdump runs the exact same credential theft:

{
  "EventID": 10,
  "SourceImage": "C:\\Windows\\Temp\\procdump64.exe",
  "TargetImage": "C:\\Windows\\System32\\lsass.exe",
  "GrantedAccess": "0x1FFFFF",
  "SourceUser": "NORTHGATE\\t.ashworth"
}

Same target — lsass.exe. Same attacker. Same stolen credentials. But SourceImage is procdump64.exe, not mimikatz.exe. The GrantedAccess is 0x1FFFFF (full access), not 0x1010. The Sigma rule doesn't match. No alert. The dashboard stays green. The credentials are gone.
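The miss is mechanical, and you can reproduce it without a SIEM. The sketch below is a deliberately simplified stand-in for a Sigma matcher (a few lines of Python, not the real pySigma engine), run against the two Event 10 records above:

```python
# Simplified stand-in for a Sigma matcher: evaluates the narrow rule's
# selection (endswith on the images, exact GrantedAccess list) against
# both Sysmon Event 10 records from above.

RULE = {
    "TargetImage|endswith": "\\lsass.exe",
    "SourceImage|endswith": "\\mimikatz.exe",
    "GrantedAccess": ["0x1010", "0x1410"],
}

mimikatz_event = {
    "EventID": 10,
    "SourceImage": "C:\\Users\\attacker\\mimikatz.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1010",
}

procdump_event = {
    "EventID": 10,
    "SourceImage": "C:\\Windows\\Temp\\procdump64.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1FFFFF",
}

def matches(rule: dict, event: dict) -> bool:
    """AND every rule field; a list with no modifier means any-of."""
    for key, expected in rule.items():
        field, _, modifier = key.partition("|")
        value = event.get(field, "")
        if modifier == "endswith":
            if not value.endswith(expected):
                return False
        elif isinstance(expected, list):
            if value not in expected:
                return False
        elif value != expected:
            return False
    return True

print(matches(RULE, mimikatz_event))  # True  — alert fires
print(matches(RULE, procdump_event))  # False — same theft, no alert
```

A few lines of matching logic are enough to show the failure: the rule keys on the tool name, not the technique.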

Here's what the rule should look like to catch both variants — and four others. The tabs below show the same detection logic in every format the course covers:

title: LSASS Memory Access - Credential Dumping (Multi-Variant)
id: a4f2c8e0-91d7-4b5a-8c3e-6d9f0e1a2b34
status: stable
logsource:
    category: process_access
    product: windows
detection:
    selection:
        TargetImage|endswith: '\lsass.exe'
        GrantedAccess|contains:
            - '0x10'
            - '0x1FFFFF'
    filter_system:
        SourceImage|startswith:
            - 'C:\Windows\System32\'
            - 'C:\Program Files\Windows Defender\'
    condition: selection and not filter_system
level: critical
tags:
    - attack.credential_access
    - attack.t1003.001
// Sentinel KQL — LSASS Access (Multi-Variant)
// Table: DeviceEvents via the MDE connector. Sigma's process_access
// category maps to ActionType "OpenProcessApiCall" — not to the
// process-creation table DeviceProcessEvents.
DeviceEvents
| where TimeGenerated > ago(1h)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
    "MsMpEng.exe", "csrss.exe", "services.exe", "svchost.exe"
  )
| where InitiatingProcessFolderPath !startswith "C:\\Windows\\System32\\"
| project TimeGenerated, DeviceName, InitiatingProcessFileName,
          InitiatingProcessCommandLine, AccountName
// Defender XDR Advanced Hunting — LSASS Access (Multi-Variant)
// Note: Timestamp instead of TimeGenerated; same DeviceEvents table
// with ActionType "OpenProcessApiCall" for process access
DeviceEvents
| where Timestamp > ago(1h)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
    "MsMpEng.exe", "csrss.exe", "services.exe", "svchost.exe"
  )
| where InitiatingProcessFolderPath !startswith "C:\\Windows\\System32\\"
| project Timestamp, DeviceName, InitiatingProcessFileName,
          InitiatingProcessCommandLine, AccountName
Splunk SPL — LSASS Access (Multi-Variant), Sysmon Event 10
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
    EventCode=10 TargetImage="*\\lsass.exe"
| where NOT match(SourceImage, "^C:\\\\Windows\\\\System32\\\\")
| where NOT match(SourceImage, "^C:\\\\Program Files\\\\Windows Defender\\\\")
| table _time, Computer, SourceImage, GrantedAccess, SourceUser
| sort - _time

This is the pattern every technique sub follows: one detection logic in four formats, shown in tabs. The Sigma rule is canonical — the Sentinel, Defender XDR, and Splunk queries are conversions of it. Each tab shows how the same detection looks in your SIEM.

You wouldn't know which rule you had without testing both variants. That's the gap.

Claim two: "We have AiTM detection"

After the credential phishing news cycle, someone wrote this KQL rule for Sentinel:

// AiTM Detection - Impossible Travel with New Device
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType == 0
| where RiskLevelDuringSignIn == "none"
| where DeviceDetail_deviceId == ""
| where Location != PreviousLocation
| project TimeGenerated, UserPrincipalName, Location, IPAddress,
          DeviceDetail_deviceId, AppDisplayName

The rule has been deployed for four months. It's been quiet. Here's the problem: Microsoft changed the field in February. The flattened DeviceDetail_deviceId column gave way to the nested DeviceDetail.deviceId property. The old name still resolves, but it now always returns null — and in KQL a comparison with null evaluates to false, so the filter never matches. The rule runs. It never fires. The dashboard shows it as active.

Here's the before-and-after:

// BEFORE (broken after February schema update)
| where DeviceDetail_deviceId == ""
// The flattened column now always returns null, never "". In KQL a
// comparison with null evaluates to false, so this filter matches
// NOTHING. Net result: zero matches, zero alerts, zero coverage.

// AFTER (fixed)
| where tostring(DeviceDetail.deviceId) == ""
// Correct nested property access — tostring() turns a missing value
// into "", so the rule again matches AiTM sign-ins where no device
// is registered (the attacker's replayed session)

One renamed field. The rule was working in December. It broke in February. Nobody noticed, because the dashboard showed "deployed and active." The only way to find this: fire an AiTM attack against your dev tenant and check whether the alert appears.
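You can model the failure mechanics in a few lines of Python. The rows below are invented stand-ins for SigninLogs records, and the null reading of the deprecated column is an assumption about the schema change — the point is the comparison semantics:

```python
# Invented stand-ins for two SigninLogs rows: the flattened column is
# gone (reads as null/None); only the nested DeviceDetail object
# is populated.
aitm_signin = {
    "UserPrincipalName": "t.ashworth@northgate.example",
    "DeviceDetail": {},  # attacker's replayed session: no device
}
legit_signin = {
    "UserPrincipalName": "t.ashworth@northgate.example",
    "DeviceDetail": {"deviceId": "a1b2c3d4-device"},  # invented ID
}

def broken_filter(row):
    # BEFORE: the flattened column reads as None for every row, and
    # None == "" is False — like a KQL comparison against null
    return row.get("DeviceDetail_deviceId") == ""

def fixed_filter(row):
    # AFTER: tostring(DeviceDetail.deviceId) coerces a missing value
    # to "", so the empty-device check works again
    return str(row.get("DeviceDetail", {}).get("deviceId") or "") == ""

print([broken_filter(r) for r in (aitm_signin, legit_signin)])  # [False, False]
print([fixed_filter(r) for r in (aitm_signin, legit_signin)])   # [True, False]
```

The broken filter returns False for every row — attacker and user alike — which is exactly what "deployed and quiet" looks like.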

Claim three: "We have ransomware coverage"

The team has a Sigma rule for T1486 that matches on file extension changes. The rule works — it fires when files start getting renamed to .encrypted, .locked, .crypted. But look at the timeline of an actual ransomware attack:

14:31  Initial access — phishing email, user clicks           ← no rule fires
14:36  Credential theft — LSASS dumped on user workstation     ← no rule fires
15:14  Lateral movement — attacker moves to file server        ← no rule fires
15:28  Discovery — attacker maps the share                     ← no rule fires
15:41  Staging — attacker archives target files                ← no rule fires
15:52  Exfiltration — archive sent to C2                       ← no rule fires
16:01  Encryption begins — ransomware binary executes          ← no rule fires
16:04  File extensions change                                  ← RULE FIRES HERE
16:08  Encryption complete — ransom note dropped

The rule fires at 16:04. By then: credentials stolen ninety minutes ago, attacker on the file server for fifty minutes, data already exfiltrated, encryption three minutes from completion. The alert is real. The coverage is real. But the useful detection window — where containment could have prevented data loss — was between 14:36 and 15:52. The rule covers none of that window.

A purple-team exercise surfaces this by walking the chain step by step. You fire each technique in sequence and check after each: did any rule fire? For most teams, steps 1 through 7 produce zero alerts. Step 8 produces one — and by then it's too late.
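That walk is scriptable. The harness below is a hypothetical sketch — fire_technique and alerts_since are placeholder names that would wrap your attack automation (for example, an Atomic Red Team runner) and your SIEM's query API; neither is a real client:

```python
import time

# Hypothetical purple-team chain walk. Every function name is a
# stand-in; the technique IDs mirror the ransomware timeline above.

CHAIN = [
    ("T1566.001", "phishing attachment"),
    ("T1003.001", "LSASS credential dump"),
    ("T1021.002", "lateral movement via SMB"),
    ("T1083",     "file and directory discovery"),
    ("T1560.001", "archive collected data"),
    ("T1041",     "exfiltration over C2 channel"),
    ("T1486",     "data encrypted for impact"),
]

def fire_technique(technique_id):   # stand-in for attack automation
    print(f"firing {technique_id}")

def alerts_since(start):            # stand-in for a SIEM alert query
    return []                       # most teams: zero, until encryption

def walk_chain():
    """Fire each step, check for alerts, return the uncovered steps."""
    gaps = []
    for technique_id, name in CHAIN:
        start = time.time()
        fire_technique(technique_id)
        fired = alerts_since(start)
        status = "ALERT" if fired else "silent"
        print(f"{technique_id:10} {name:32} -> {status}")
        if not fired:
            gaps.append(technique_id)
    return gaps

if __name__ == "__main__":
    uncovered = walk_chain()
    print(f"{len(uncovered)}/{len(CHAIN)} chain steps produced no alert")
```

The output is the deliverable: a per-step coverage list, not a dashboard status.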

Why rules break silently

The three claims above are symptoms. The systemic causes are predictable:

Vendor telemetry shifts. The AiTM example — DeviceDetail_deviceId becomes DeviceDetail.deviceId. The rule loads, runs, matches nothing.

Attacker tools evolve. The credential dumping example — mimikatz.exe becomes procdump64.exe. Same technique, different Sysmon Event 10 fields.

Environments diverge. A rule's exception matches DC*. Production has MELDC1. The rule fires on every legitimate replication event. The SOC mutes it. Coverage: zero.

Telemetry sources break. The Azure Monitor Agent stops forwarding after a patch. The rule has no events to match on.

Tuning was done once. New software produces the same pattern. Someone adds a blanket exclusion that also excludes the real attack.

Each failure is invisible on the dashboard. Each becomes visible the moment you fire the attack.
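One way to surface the tuning failure before an exercise does: pin the known-bad telemetry from past tests and re-run it through the rule after every change. A minimal Python sketch — the exclusion list and events are illustrative, not from any real rule:

```python
# Detection regression sketch: keep events that SHOULD fire the rule,
# and assert that each tuning change still lets them fire.

KNOWN_BAD = [
    {"SourceImage": "C:\\Windows\\Temp\\procdump64.exe",
     "TargetImage": "C:\\Windows\\System32\\lsass.exe"},
    {"SourceImage": "C:\\Users\\attacker\\mimikatz.exe",
     "TargetImage": "C:\\Windows\\System32\\lsass.exe"},
]

# A "blanket exclusion" added during tuning — deliberately too broad:
# it silences C:\Windows\Temp\ along with System32.
EXCLUDED_PREFIXES = ["C:\\Windows\\"]

def rule_fires(event):
    if not event["TargetImage"].endswith("\\lsass.exe"):
        return False
    return not any(event["SourceImage"].startswith(prefix)
                   for prefix in EXCLUDED_PREFIXES)

for event in KNOWN_BAD:
    status = "fires" if rule_fires(event) else "SILENCED BY TUNING"
    print(event["SourceImage"], "->", status)
```

Run as a CI step against the rule repo, this catches the over-broad exclusion the day it's committed rather than the day of the breach.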

What changes when you test

When you reach Module 7, you'll run all six LSASS dumping variants against your own lab. You'll see events like the ones above in your own Sentinel workspace. You'll write the multi-variant Sigma rule. You'll convert it to KQL:

// Sentinel KQL — LSASS Access (Multi-Variant)
DeviceEvents
| where TimeGenerated > ago(1h)
| where ActionType == "OpenProcessApiCall"
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
    "MsMpEng.exe", "csrss.exe", "services.exe", "svchost.exe"
  )
| project TimeGenerated, DeviceName, InitiatingProcessFileName,
          InitiatingProcessCommandLine, AccountName

You'll tune it. You'll log the result. And your answer to the IT director will change from "yes, we have a rule" to "yes — I tested six variants this month, five fire in under five seconds, one is in the remediation backlog. Here's the report."

Same question. Different answer. The difference is evidence.

Next
PT0.2 — The Purple-Team Mindset. You've seen the gap. PT0.2 explains why it exists as a systemic problem and introduces the continuous rhythm — daily, weekly, monthly, quarterly — that keeps coverage current.

You've built the lab and understand the validation gap.

Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.

  • 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
  • Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
  • Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
  • Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
  • Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course