In this section
TH0.3 The Detection Pyramid
Three categories of threat
Not all threats are the same, and the operational response to each category is fundamentally different. Confusing these categories — treating all threats as if they were the same kind of problem — is one of the most common strategic errors in security operations. It leads to organizations that invest heavily in detection engineering and then cannot understand why sophisticated attacks still succeed.
Known-known threats. The attack technique is documented. The telemetry that records it is identified. A detection rule exists and is deployed. When the attack occurs, an alert fires. This is the domain of detection engineering — and it is the only category that automated detection covers.
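As a minimal sketch of what this category looks like in practice — the table, event ID, and threshold here are illustrative assumptions, not a recommended rule — a known-known detection is just a query over known telemetry with a fixed trigger condition:

```kusto
// Known-known sketch: documented technique (password spray via failed
// logons), known telemetry (SecurityEvent 4625), fixed threshold.
// The threshold and window are illustrative — tune before deploying.
SecurityEvent
| where TimeGenerated > ago(1h)
| where EventID == 4625                     // failed logon
| summarize FailedLogons = count()
    by TargetAccount, IpAddress, bin(TimeGenerated, 5m)
| where FailedLogons > 20                   // the "known" trigger condition
```

Everything in this rule is pre-specified; nothing outside its event ID and threshold will ever fire it — which is exactly why the upper layers of the pyramid exist.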
// Which pyramid layers are producing findings?
SecurityAlert
| where TimeGenerated > ago(90d)
| extend DetectionLayer = case(
    ProviderName == "ASI Scheduled Alerts", "Known-Known: Custom Rules",
    ProviderName has "MCAS", "Known-Known: Defender for Cloud Apps",
    ProviderName has "MDATP", "Known-Known: Defender for Endpoint",
    ProviderName has "IPC", "Unknown-Unknown: Identity Protection (UEBA)",
    ProviderName has "Azure Sentinel" and AlertName has "Anomal",
        "Unknown-Unknown: Anomaly Detection",
    "Known-Known: Built-in Detection")
| summarize AlertCount = count(), DistinctAlerts = dcount(AlertName)
    by DetectionLayer
| sort by AlertCount desc
// If "Unknown-Unknown" layers return zero or near-zero results,
// your anomaly detection layer is either not deployed or not producing alerts
// If all results are "Known-Known" — only the base of the pyramid is active
// The Known-Unknown layer (hunting) will never appear here because
// hunting does not produce automated alerts — it produces findings
// that are documented in hunt records, not in the SecurityAlert table
Try it yourself
Exercise: Map your current investment across the pyramid
Categorize your SOC's current operational activities:
Known-known (detection engineering): How many analytics rules are deployed? How many were created or updated in the last 90 days? Is there a detection engineering backlog? How many hours per week are dedicated to building and maintaining rules?
Known-unknown (hunting): How many structured hunt campaigns were executed in the last 90 days? Is there a hunt backlog? Are hunt hours protected from the alert queue? Do hunts produce documented findings and detection rules?
Unknown-unknown (anomaly detection): Are behavioral baselines deployed? Do you use UEBA (User and Entity Behavior Analytics)? Are anomaly-flagged events reviewed by an analyst, or do they sit in a workbook nobody checks?
If the answer to most of the hunting and anomaly detection questions is "no" or "not systematically" — you are operating on one layer of the pyramid. That is the gap this course addresses.
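If the anomaly-detection answers were "no," KQL's built-in series decomposition is a low-cost starting point. A hedged sketch — the table, threshold, and lookback window are assumptions to adapt, not a tuned configuration:

```kusto
// Unknown-unknown sketch: flag hours where sign-in volume deviates
// from its learned baseline. 2.5 is the anomaly-score threshold and
// 14d the baseline window — both illustrative.
SigninLogs
| where TimeGenerated > ago(14d)
| make-series SigninCount = count() default = 0
    on TimeGenerated step 1h
| extend (Anomalies, Score, Baseline) =
    series_decompose_anomalies(SigninCount, 2.5)
| mv-expand TimeGenerated to typeof(datetime),
    SigninCount to typeof(long),
    Anomalies to typeof(int)
| where Anomalies != 0    // keep only flagged hours for analyst review
```

The query only flags; per the pyramid, someone still has to review the flagged hours — otherwise this becomes the workbook nobody checks.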
The myth: User and Entity Behavior Analytics (UEBA) is threat hunting. Enabling UEBA in Sentinel means you have a hunting capability.
The reality: UEBA operates in the unknown-unknown layer — it flags behavioral anomalies based on statistical deviation from baselines. It does not formulate hypotheses, it does not investigate its own findings, and it does not produce detection rules. UEBA flags anomalies. A human must investigate those anomalies to determine whether they indicate compromise. That investigation is hunting. Without an analyst reviewing UEBA anomalies, investigating the flagged behavior, and documenting findings, UEBA is an unread report — data without action. UEBA is a valuable input to hunting, not a replacement for it.
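What "an analyst reviewing UEBA anomalies" can start from, sketched against Sentinel's BehaviorAnalytics table — the priority cutoff of 5 is an illustrative assumption, not Microsoft guidance:

```kusto
// UEBA as hunting input: surface the highest-priority behavioral
// anomalies for human investigation. The cutoff is illustrative.
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where InvestigationPriority >= 5
| project TimeGenerated, UserName, ActivityType, ActionType,
    SourceIPAddress, InvestigationPriority
| sort by InvestigationPriority desc
```

The output is a queue for a human, not a verdict — each row still needs the hypothesis-investigate-document cycle before it counts as hunting.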
Extend this model
The detection pyramid applies to any security operation, not only M365. If your organization runs multiple SIEM platforms or EDR solutions, the three layers exist in each — and the gaps are often in the same place (hunting and anomaly detection under-resourced relative to detection engineering). The pyramid is a useful framework for any conversation about security operations investment because it makes the resource imbalance visible: ask leadership to estimate what percentage of their security budget funds each layer. The answer is usually 80%+ on the base (tooling, detection engineering, alert triage) and near-zero on the middle and top.
References Used in This Subsection
- MITRE Corporation. "MITRE ATT&CK — Enterprise Matrix." https://attack.mitre.org
- Microsoft. "Microsoft Sentinel UEBA — User and Entity Behavior Analytics." Microsoft Learn. https://learn.microsoft.com/en-us/azure/sentinel/identify-threats-with-entity-behavior-analytics
- Microsoft. "Kusto Query Language — series_decompose_anomalies()." Microsoft Learn. https://learn.microsoft.com/en-us/kusto/query/series-decompose-anomalies-function
- Sqrrl (acquired by Amazon). "A Framework for Cyber Threat Hunting." Foundational hunting methodology reference.
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable, but the coverage gap hunt has the longer-lasting impact: the detection rule it produces keeps working after the hunt ends.
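One way to make the coverage gap concrete is to compare the tactics that have actually fired alerts against the full Enterprise tactic list. A sketch — the SecurityAlert Tactics column's format varies by provider; comma-separated values are assumed here:

```kusto
// Which ATT&CK tactics produced zero alerts in 90 days? Zeros are
// candidate coverage-gap hunts. Assumes Tactics is comma-separated.
let AllTactics = datatable(Tactic: string) [
    "InitialAccess", "Execution", "Persistence", "PrivilegeEscalation",
    "DefenseEvasion", "CredentialAccess", "Discovery", "LateralMovement",
    "Collection", "CommandAndControl", "Exfiltration", "Impact"
];
let Fired = SecurityAlert
    | where TimeGenerated > ago(90d)
    | mv-expand Tactic = split(Tactics, ",") to typeof(string)
    | extend Tactic = trim(" ", Tactic)
    | summarize AlertCount = count() by Tactic;
AllTactics
| join kind=leftouter Fired on Tactic
| project Tactic, AlertCount = coalesce(AlertCount, 0)
| sort by AlertCount asc    // zeros first: the gaps to hunt
```

A tactic with zero alerts is either never attempted in your environment or — the assumption worth hunting — attempted and undetected.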
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program