In this section
TH0.6 Hunting, IR, and Detection Engineering
Three disciplines, one data lake
The tools overlap. Sentinel is the workspace for detection engineering (analytics rules), hunting (hunting queries), and incident response (incident investigation). KQL is the query language for all three. The Defender XDR portal serves all three. An analyst may write a detection rule in the morning, investigate an incident in the afternoon, and run a hunt campaign on Friday.
The overlap is why organizations confuse them — if the same person uses the same tool to query the same data, what is the difference? The difference is trigger, method, and output.
What triggers each
// Incident source distribution — which disciplines are producing findings?
SecurityIncident
| where TimeGenerated > ago(180d)
| extend IncidentSource = case(
    tostring(AdditionalData) has "Scheduled", "Detection Rule (Automated)",
    tostring(AdditionalData) has "Fusion", "Fusion/ML Correlation",
    tostring(AdditionalData) has "NRT", "Near Real-Time Rule",
    "Built-in / Other")
| summarize
    IncidentCount = count(),
    AvgSeverity = avg(case(
        Severity == "High", 3,
        Severity == "Medium", 2,
        Severity == "Low", 1, 0))
    by IncidentSource
| sort by IncidentCount desc
// If 100% of incidents come from detection rules, your triad has one active leg
// Incidents discovered by hunting appear as manually created incidents
// or do not appear here at all (they may be in hunt records, not Sentinel)
// The gap between what rules find and what exists = hunting surface
Try it yourself
Exercise: Trace a real handoff in your organization
Pick a recent closed incident in your Sentinel workspace. Answer these questions:
Detection → IR (handoff 1): Which detection rule triggered the incident? Was the rule a built-in Defender XDR detection or a custom analytics rule?
IR → Detection (handoff 2): During the investigation, did the analyst discover any technique the attacker used that was NOT detected by any rule? If yes, was a detection engineering backlog item created? If not, the handoff failed — the gap persists.
IR → Hunting (handoff 5): Did the investigation raise any questions about wider scope — other accounts, other time windows, other systems that may have been affected but were not included in the original alert? If yes, was a hunt conducted to answer those questions? If not, the wider scope remains unknown.
If handoffs 2 and 5 did not happen, your triad is operating as a single discipline (IR) with no feedback into detection improvement or proactive hunting. That is the gap this course and the broader Ridgeline curriculum address.
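A query can help you find candidates for this exercise. The sketch below surfaces incidents with no associated analytics rule — likely manually created, often the output of a hunt. It assumes the standard Sentinel SecurityIncident schema, where RelatedAnalyticRuleIds is empty for manually created incidents; verify against your workspace before relying on it.

// Hedged sketch: incidents not tied to any analytics rule are candidates
// for manual creation (e.g., raised from a hunt)
SecurityIncident
| where TimeGenerated > ago(90d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber   // latest state per incident
| extend RuleCount = array_length(RelatedAnalyticRuleIds)
| where isnull(RuleCount) or RuleCount == 0
| project IncidentNumber, Title, Severity, Status, CreatedTime
| sort by CreatedTime desc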
The myth: Hunting and IR are the same thing. If the analyst is querying data and looking for threats, they are doing IR. Hunting is just a buzzword for proactive investigation.
The reality: Hunting and IR share tools and data sources but differ in trigger, method, scope, and output. IR is triggered by an alert and scoped to a specific incident — the investigation follows the evidence of a known compromise. Hunting is triggered by a hypothesis and scoped by the hunter — the investigation explores a threat category across the environment without evidence that a compromise has occurred. IR produces incident findings and containment actions. Hunting produces findings, detection rules, and environmental understanding. Confusing them leads to under-investment in both: the SOC "hunts" only when investigating incidents (IR) and never conducts structured, hypothesis-driven campaigns against the unknown-known layer.
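The trigger-and-scope difference is visible in the queries themselves. The pair below is an illustrative sketch, not a production detection: the account name is a hypothetical placeholder, and the distinct-IP threshold is an arbitrary example value you would tune to your environment.

// IR: scoped to the evidence of a known incident — one account, one window
SigninLogs
| where TimeGenerated between (datetime(2024-03-01) .. datetime(2024-03-03))
| where UserPrincipalName == "compromised.user@contoso.com"   // hypothetical account
| project TimeGenerated, IPAddress, AppDisplayName, ResultType

// Hunting: scoped by hypothesis — the whole environment, no known compromise
// Hypothesis: token theft shows as one account signing in from an unusual
// number of distinct IPs within a short window
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType == 0
| summarize DistinctIPs = dcount(IPAddress), SampleIPs = make_set(IPAddress, 10)
    by UserPrincipalName, bin(TimeGenerated, 1h)
| where DistinctIPs >= 4   // example threshold — tune for your environment

The IR query cannot find anything the alert did not already point at; the hunt query can, which is exactly the unknown-known layer the triad model describes.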
Extend this model
Some organizations add a fourth discipline to the triad: threat intelligence. TI provides the external context — what attackers are doing to organizations like yours — that feeds hypothesis generation for hunting and technique identification for detection engineering. The SOC Operations course (Module S12) covers TI operations in depth, including the TI-to-detection pipeline and TI-driven hunting. For this course, TI is treated as an input to hunting rather than a separate discipline, but in organizations with dedicated TI analysts, the four-discipline model (TI → Detection Engineering → Hunting → IR → TI) creates an even tighter reinforcing loop.
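As a taste of the TI-driven hunting that Module S12 covers in depth, the sketch below sweeps recent sign-ins against active network indicators. It assumes the ThreatIntelligenceIndicator table is populated in your workspace and that indicator IPs are comparable to sign-in source IPs; treat it as a starting point, not a finished hunt.

// Hedged sketch: retroactive sweep of sign-ins against active TI network indicators
ThreatIntelligenceIndicator
| where TimeGenerated > ago(14d)
| where Active == true and isnotempty(NetworkIP)
| summarize arg_max(TimeGenerated, *) by IndicatorId   // latest version of each indicator
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(14d)
    | project SigninTime = TimeGenerated, UserPrincipalName, IPAddress
  ) on $left.NetworkIP == $right.IPAddress
| project SigninTime, UserPrincipalName, NetworkIP, Description, ConfidenceScore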
References Used in This Subsection
- MITRE ATT&CK Techniques referenced: T1098.003 (Additional Cloud Roles)
- Course cross-references: Mastering KQL (KQL foundation), SOC Operations Module S12 (TI operations), Practical IR (Six-Step Investigation Method)
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.
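A first pass at finding the coverage gap can come from alert telemetry itself. The sketch below counts alerts per ATT&CK tactic over 180 days — a tactic at or near zero either sees no activity or has no working coverage, and both cases are hunt candidates. It assumes the standard SecurityAlert schema, where Tactics is a comma-separated string; compare the output against the full ATT&CK tactic list, since tactics with no rules at all will simply be absent.

// Hedged sketch: alert volume per ATT&CK tactic — low or missing tactics
// are candidates for a coverage-gap hunt
SecurityAlert
| where TimeGenerated > ago(180d)
| where isnotempty(Tactics)
| mv-expand Tactic = split(Tactics, ",")
| extend Tactic = trim(@" ", tostring(Tactic))
| summarize AlertCount = count(), DistinctAlertNames = dcount(AlertName) by Tactic
| sort by AlertCount asc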
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program