TH1.16 Module Summary
Module Summary: The Hunt Cycle
This module defined the six-step methodology that every campaign module in this course follows. The Hunt Cycle is not a theoretical framework — it is the operational process you execute every time you hunt.
The six steps in practice
1. Hypothesize (TH1.1). Formulate a specific, testable, grounded, actionable hypothesis. Six sources: ATT&CK coverage gaps, prior incident findings, threat intelligence, environmental changes, detection rule failures, and peer community sharing. The hypothesis formula: “If [attacker behavior], then [data source] will contain [observable indicator] that differs from [baseline].” Three failure modes to avoid: too broad, untestable with available data, unfalsifiable.
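As a sketch of how the formula becomes a first query, here is one illustrative translation in KQL. The table, result code, and threshold (SigninLogs, ResultType 50126, 10 accounts) are assumptions for the example, not values fixed by this module:

```kusto
// Hypothesis (illustrative): "If an attacker is password-spraying, then
// sign-in logs will contain many failed logons per source IP across many
// accounts, which differs from the baseline of a few failures per user."
SigninLogs                                   // assumed data source
| where TimeGenerated > ago(7d)              // time window set during scoping
| where ResultType == "50126"                // assumed failed-credential result code
| summarize FailedAccounts = dcount(UserPrincipalName) by IPAddress
| where FailedAccounts > 10                  // observable that differs from baseline
```

Note that every piece of the formula maps to a query element: the data source is the table, the observable is the summarized count, and the baseline difference is the final threshold.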
2. Scope (TH1.2). Define four dimensions before the first query: data sources, time window, population, and success criteria. Confirm data sources are populated. Set baseline and detection windows separately for behavioral comparison hunts. Start broad, narrow on noise. Avoid both scope traps: too broad (partial results everywhere, completed analysis nowhere) and too narrow (missed variants from restricted detection surface).
3. Collect (TH1.3). Execute queries iteratively. Four-step funnel: orientation (understand the data landscape), indicator (test the hypothesis), enrichment (add context to suspicious results), pivot (expand scope on confirmed leads). Document every query — not just the ones that produced results. Manage Advanced Hunting limits: 10-minute timeout, 10,000-row limit, 30-day lookback.
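A minimal sketch of the orientation step, assuming CloudAppEvents as the data source: before testing the hypothesis, confirm the table is populated for your window and see what shapes the data takes.

```kusto
// Orientation (illustrative): check population and event mix before
// running indicator queries against the hypothesis.
CloudAppEvents
| where Timestamp > ago(30d)                 // Advanced Hunting lookback cap
| summarize Events = count(),
            FirstSeen = min(Timestamp),
            LastSeen = max(Timestamp)
    by ActionType
| top 20 by Events desc                      // stay well under the 10,000-row limit
```

An orientation query that returns nothing is itself a finding: the scoped data source is not populated, and the hunt record should say so.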
4. Analyze (TH1.4). Separate signal from noise using five enrichment dimensions: user context, temporal context, geographic context, behavioral context, and correlated indicators. Confidence model: 1 dimension = indicator, 2 dimensions = suspicious, 3+ dimensions = high-confidence finding. Never escalate on a single dimension. Never rationalize away correlated anomalies.
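The confidence model can be expressed mechanically. This is an illustrative sketch only: the dimension flags would normally come from enrichment joins, and the accounts are invented here to show the shape.

```kusto
// Illustrative scoring of the five enrichment dimensions per entity.
datatable(Account:string, UserCtx:bool, TemporalCtx:bool, GeoCtx:bool,
          BehavioralCtx:bool, Correlated:bool)
[
    "alice@contoso.com", true, true, true, false, false,   // 3 dimensions
    "bob@contoso.com",   true, false, false, false, false  // 1 dimension
]
| extend Dimensions = toint(UserCtx) + toint(TemporalCtx) + toint(GeoCtx)
                    + toint(BehavioralCtx) + toint(Correlated)
| extend Confidence = case(
    Dimensions >= 3, "high-confidence finding",
    Dimensions == 2, "suspicious",
    Dimensions == 1, "indicator",
    "no signal")
```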
5. Conclude (TH1.5). Three outcomes: confirmed (escalate to IR with evidence package), refuted (document negative finding — this has value), inconclusive (document ambiguity, identify the gap, refine and re-queue). Write the conclusion explicitly. An undocumented hunt never happened.
6. Convert (TH1.6). Turn validated hunt queries into Sentinel analytics rules. Adapt time window, add thresholds from hunt data, configure exclusions from false positive analysis, map entities, set severity, and deploy through your detection engineering process. 14-day validation period. The technique moves from the known-unknown layer to the known-known layer permanently.
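The conversion step can be sketched as a before/after on the hunt query itself. The window, threshold, and exclusion below are assumptions standing in for values your own hunt data would supply:

```kusto
// Illustrative analytics-rule adaptation of a hunt query:
// narrower window, threshold from hunt data, exclusions from FP analysis.
let Exclusions = dynamic(["svc-backup@contoso.com"]);  // assumed FP-derived list
SigninLogs
| where TimeGenerated > ago(1h)              // rule window replaces the long hunt window
| where UserPrincipalName !in (Exclusions)
| summarize Failures = count() by UserPrincipalName, IPAddress
| where Failures >= 25                       // threshold from the hunt's observed noise floor
```

The hunt query optimizes for recall over a long window; the rule optimizes for precision over a short one. That is why the window, threshold, and exclusions all change during conversion.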
The hunt documentation standard (TH1.7)
Every hunt produces a record with seven sections: header, hypothesis, scope, collection (query log), analysis (enrichment table), conclusion, and detection conversion. Complete it incrementally as you work through the Hunt Cycle, not retrospectively. Store it in a shared location. The record is the output that makes hunting an organizational capability rather than an individual effort.
Supporting disciplines (TH1.8–TH1.15)
Hypothesis prioritization (TH1.8). Three-dimension scoring model: threat relevance × data availability × detection gap severity. Composite score 1–27. Four priority bands: hunt now (18–27), this quarter (8–17), when capacity (3–7), defer (1–2). The backlog is a living document — add, promote, retire, complete.
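Because the composite is the product of three 1–3 ratings, the scoring and banding reduce to a few lines of KQL. The hypotheses and ratings below are invented for illustration:

```kusto
// Illustrative backlog scoring: each dimension rated 1-3, composite is the product.
datatable(Hypothesis:string, ThreatRelevance:int, DataAvailability:int, GapSeverity:int)
[
    "OAuth consent phishing", 3, 3, 2,   // 18 -> hunt now
    "Golden SAML",            3, 1, 3    //  9 -> this quarter
]
| extend Composite = ThreatRelevance * DataAvailability * GapSeverity
| extend Band = case(
    Composite >= 18, "hunt now",
    Composite >= 8,  "this quarter",
    Composite >= 3,  "when capacity",
    "defer")
| order by Composite desc
```

Multiplication rather than addition is deliberate: a 1 in any dimension (no data, no relevance, no gap) drags the composite down sharply, which is exactly the behavior you want from a prioritization model.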
Multi-table correlation (TH1.9). Six join patterns that connect evidence across data source boundaries: auth + directory, auth + cloud app, email + auth, cloud + endpoint, interactive + non-interactive, audit + service principal. Pre-filter before joining. Use kind=inner for most hunting joins. Avoid high-cardinality column joins.
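A sketch of the auth + cloud app pattern, showing the pre-filter-then-join discipline. The risk level, action type, and the field mapping out of RawEventData are assumptions for illustration:

```kusto
// Illustrative auth + cloud app correlation; both sides are filtered
// before the join so the join operates on small inputs.
let SuspiciousSignins = SigninLogs
    | where TimeGenerated > ago(7d)
    | where RiskLevelDuringSignIn == "high"      // pre-filter: shrink the left side
    | project UserPrincipalName, SigninTime = TimeGenerated, IPAddress;
CloudAppEvents
| where Timestamp > ago(7d)
| where ActionType == "MailItemsAccessed"        // pre-filter: shrink the right side
| extend UserPrincipalName = tostring(RawEventData.UserId)  // assumed field mapping
| join kind=inner SuspiciousSignins on UserPrincipalName    // inner: matches only
| where Timestamp between (SigninTime .. SigninTime + 1h)   // tie activity to the sign-in
```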
Behavioral baselining (TH1.10). Per-entity baselines constructed from historical data. The gap window prevents attacker activity from contaminating the baseline. Edge cases: new users (exclude or flag), role changes (enrich with directory data), seasonal variation (extend baseline window). Validate baselines before hunting against them.
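A minimal sketch of a per-entity baseline with a gap window, assuming sign-in country as the behavior being baselined. The 30/7/1-day split is an illustrative choice, not a prescribed one:

```kusto
// Illustrative per-user baseline: days 37..7 build the norm, days 7..1 are
// the gap window (excluded so recent attacker activity cannot contaminate
// the baseline), and the last day is the detection window.
let BaselineStart = ago(37d);
let GapStart = ago(7d);
let DetectionStart = ago(1d);
let Baseline = SigninLogs
    | where TimeGenerated between (BaselineStart .. GapStart)
    | summarize BaselineCountries = make_set(Location) by UserPrincipalName;
SigninLogs
| where TimeGenerated > DetectionStart
| join kind=inner Baseline on UserPrincipalName
| where not(set_has_element(BaselineCountries, Location))  // never-seen country
```

The kind=inner join also handles the new-user edge case for free: users with no baseline rows are dropped rather than flagged on everything.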
Working with false positives (TH1.11). Four FP categories: infrastructure, role-based, temporal, onboarding. Each hunt’s FP analysis produces the exclusion list for the detection rule. Hunt-informed rules start at 70–90% true positive rate versus 10% for rules deployed without FP analysis.
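Three of the four FP categories map directly onto query filters. The IPs, accounts, and batch window below are invented placeholders for values your own FP analysis would produce:

```kusto
// Illustrative FP-driven exclusions: each analysis category becomes a
// concrete filter in the converted rule.
let InfraIPs = dynamic(["203.0.113.10"]);             // infrastructure FP: vuln scanner
let RoleAccounts = dynamic(["svc-sync@contoso.com"]); // role-based FP: sync account
SigninLogs
| where TimeGenerated > ago(1d)
| where IPAddress !in (InfraIPs)
| where UserPrincipalName !in (RoleAccounts)
| where hourofday(TimeGenerated) !between (1 .. 3)    // temporal FP: nightly batch window
```

Onboarding FPs are usually handled differently, with a time-bounded suppression rather than a permanent exclusion, since they age out on their own.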
Escalation protocols (TH1.12). The escalation package: finding summary, evidence timeline, recommended containment, hunt context. Warm handoff for high-severity. Two scenarios: escalate and continue (single entity) or escalate and merge (wide scope). Speed at escalation determines whether the dwell time compression is realized.
Worked end-to-end example (TH1.13). Complete Hunt Cycle from TI report through five queries to one confirmed compromise (OAuth consent phishing, 43-day dwell time) and one detection rule deployed. The model for every campaign you execute.
Hunt cadence and scheduling (TH1.14). Three models: weekly (5+ analysts), biweekly (3–5), monthly (2–3 or solo). All viable. Consistency matters more than frequency. Calendar blocks are non-negotiable. Rotational staffing to start; dedicated if the program earns it.
Quality assurance (TH1.15). Three review points: before hunting (hypothesis/scope), before closing (hunt record), before deploying (detection rule). Total QA overhead: ~20 minutes per hunt. A false negative from a methodology error is worse than no hunt — it creates incorrect documented assurance.
What comes next
TH2 teaches the advanced KQL patterns — make-series, series_decompose_anomalies(), autocluster(), top-nested, graph semantics — that the campaign modules apply. If you are already fluent in time-series analysis and behavioral clustering, skim TH2 and proceed to TH3.
TH3 is where the Hunt Cycle becomes operational. The ATT&CK coverage analysis exercise produces your first hunt backlog — the prioritized list of hypotheses that feeds every campaign you run afterward. TH3 is the bridge between methodology and execution.
TH4–TH13 are the campaigns. Each module follows the Hunt Cycle from hypothesis through conversion. Each produces documented findings and at least one detection rule. Pick the campaign most relevant to your highest-priority gap from TH3, or work through them in sequence.