In this section
TH1.2 Scoping the Hunt
Scope before you query
The hypothesis tells you what to look for. The scope tells you where to look. Skip this step and one of two things happens: you query everything, get 500,000 rows, and cannot distinguish signal from noise. Or you query too narrowly, get zero rows, and conclude the threat is absent when it was just outside your filter.
Scope has four dimensions. Define all four before writing the first KQL line.
Dimension 1: Data sources
// Verify the table has recent data before building a complex hunt query
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(1d)
| count
// If this returns 0, the table is not ingested or is delayed
// Do not proceed with a hunt that depends on this table until confirmed
Dimension 2: Time window
// Example: 30-day baseline, 7-day detection window
let baselineStart = ago(37d); // 37 days back = 30-day baseline + 7-day detection window
let baselineEnd = ago(7d); // End of baseline = start of detection
let detectionStart = ago(7d);
let detectionEnd = now();
// Baseline query
let baseline = SigninLogs
| where TimeGenerated between (baselineStart .. baselineEnd)
| summarize NormalIPs = make_set(IPAddress, 20) by UserPrincipalName;
// Detection query — compare current against baseline
SigninLogs
| where TimeGenerated between (detectionStart .. detectionEnd)
| join kind=inner baseline on UserPrincipalName
| where not(set_has_element(NormalIPs, IPAddress)) // IP absent from the user's baseline set
// Findings: sign-ins from IPs not seen in the baseline period
Try it yourself
Exercise: Scope one of your hypotheses
Take one of the three hypotheses you wrote in TH1.1. Define the four scope dimensions:
Data sources: Which tables will you query? Confirm they are ingested.
Time window: How far back will you search? Do you need a separate baseline window?
Population: Full tenant, or a targeted subset? If targeted, what is excluded and why?
Success criteria: What constitutes a positive finding? What constitutes adequate coverage for a negative finding?
Write the scope definition in 3–5 sentences. If you cannot define all four dimensions, the hypothesis needs refinement before you hunt.
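As a sketch of how the population dimension (dimension 3) can be encoded directly in a query, the filter below includes a targeted subset and documents its exclusions inline. The account list and the `svc-` prefix are placeholders for your own tenant's conventions, not values from this course:

```kql
// Hypothetical population scope: privileged users only, service accounts excluded
let PrivilegedUsers = dynamic(["admin1@contoso.com", "admin2@contoso.com"]); // placeholder list
SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName in (PrivilegedUsers)   // include: the targeted subset
| where UserPrincipalName !startswith "svc-"     // exclude: service accounts (state why in your scope doc)
| summarize SignIns = count() by UserPrincipalName
```

Encoding the scope in the query itself makes the exclusions auditable: anyone reviewing the hunt can see exactly which population was and was not covered.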
The myth: A hunt covering 365 days is more thorough than one covering 30 days. Longer windows catch more.
The reality: Longer windows produce more data and more noise without necessarily producing more signal. A 365-day hunt against SigninLogs may return millions of rows and overwhelm both the query engine (Advanced Hunting has execution time limits) and the analyst. The appropriate window depends on the technique: active credential abuse is detectable in 7–30 days. Long-dwell APT persistence may require 90+ days. C2 beaconing may be detectable in 7 days of network data. Longer is not better. Matched is better — match the window to the technique's expected dwell time and the data's noise level.
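One practical way to match the window is to measure data volume before committing to a hunt. The sketch below compares candidate windows against `SigninLogs`; the windows chosen are illustrative:

```kql
// Gauge row volume per candidate window before picking one
union
    (SigninLogs | where TimeGenerated > ago(7d)  | summarize Rows = count() | extend Window = "7d"),
    (SigninLogs | where TimeGenerated > ago(30d) | summarize Rows = count() | extend Window = "30d"),
    (SigninLogs | where TimeGenerated > ago(90d) | summarize Rows = count() | extend Window = "90d")
| project Window, Rows
// If the 90d count already strains query limits, narrow the population rather than the window
```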
Extend this approach
If your Sentinel workspace has data retention beyond 90 days (configured via retention policies or Log Analytics archive), you can use Sentinel search jobs to hunt in archived data. Search jobs run asynchronously against long-retention storage, allowing hunts that span 6–12 months of historical data without the interactive query timeout constraints of Advanced Hunting. TH16 covers search jobs in detail. For most campaign modules in this course, the standard 30-day window in Advanced Hunting is sufficient.
References Used in This Subsection
- Microsoft. "Advanced Hunting — Query Limits and Quotas." Microsoft Learn. https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-limits
- Microsoft. "Search Jobs in Microsoft Sentinel." Microsoft Learn. https://learn.microsoft.com/en-us/azure/sentinel/search-jobs
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution to finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program