In this section
TH0.15 Your First 90 Days: From Zero to Operating
90 days to an operating program
The roadmap assumes you are at HMM0 or HMM1 (TH0.12). You have Sentinel deployed. You have analysts who can write KQL. You have leadership support (TH0.13) or at least implicit approval to dedicate 4+ hours per week. If any of these are missing, address them first — this roadmap starts after the prerequisites from TH0.8 are met.
Weeks 1–2: Foundation
Goal: Confirm readiness and establish baseline metrics.
Week 1:
// Week 1 summary query: your hunting program starting position
let coverageNumerator = toscalar(
SecurityAlert
| where TimeGenerated > ago(90d)
| where ProviderName == "ASI Scheduled Alerts"
| mv-expand Technique = todynamic(
tostring(parse_json(ExtendedProperties).Techniques))
| extend Technique = tostring(Technique)
| where isnotempty(Technique)
| summarize by Technique | count);
let dwellMetrics = SecurityIncident
| where TimeGenerated > ago(180d)
| where Status == "Closed"
| extend EarliestEvidence = todatetime(
parse_json(tostring(AdditionalData)).firstActivityTimeUtc)
| where isnotempty(EarliestEvidence)
| extend DwellDays = datetime_diff('day', CreatedTime, EarliestEvidence)
| where DwellDays >= 0 and DwellDays < 365
| summarize Median = percentile(DwellDays, 50),
P90 = percentile(DwellDays, 90);
print CoveredTechniques = coverageNumerator,
      MedianDwellDays = toscalar(dwellMetrics | project Median),
      P90DwellDays = toscalar(dwellMetrics | project P90)
// Combine with your ATT&CK Navigator denominator for the full ratio
// Record both numbers — they are your Day 0 baseline
Try it yourself
Exercise: Build your 90-day calendar
Open your calendar (or a planning document) and block the following:
Week 1: 4 hours — readiness assessment, data source audit, baseline metrics.
Week 2: 4 hours — HMM assessment, metrics setup, cadence definition, TH1 methodology review.
Weeks 3–4: 6 hours — TH3 ATT&CK coverage analysis (when available), backlog creation.
Weeks 5–8: 6 hours per campaign × 3 = 18 hours — three hunt campaigns with full documentation and rule conversion.
Weeks 9–12: 4 hours — rule validation, re-measurement, quarterly report compilation.
Total: ~36 hours over 12 weeks. That is 3 hours per week on average. Achievable alongside a full-time SOC role with protected time.
The myth: Hunting programs are long-term investments that do not produce measurable results for 6–12 months. Leadership should expect a long ramp-up before seeing value.
The reality: The first hunt campaign — which can be completed in week 5 of this roadmap — produces a documented finding (positive or negative) and a detection rule. That is measurable output from week 5. By day 90, the program has produced 3 hunt records, 3 detection rules, and a quantified coverage improvement. The results are immediate and compounding. The "6–12 month" timeline describes organizational culture change and full maturity (HMM3+), not the time to first measurable output. The first output comes in weeks, not months.
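That "quantified coverage improvement" is one number: covered techniques divided by in-scope techniques. A minimal KQL sketch, reusing the week 1 numerator logic; ScopedTechniqueCount is a placeholder, substitute the technique count from your own ATT&CK Navigator scoping layer:

```
// Sketch: coverage ratio at a point in time.
// ScopedTechniqueCount is a placeholder value, not NE's real scope size.
let ScopedTechniqueCount = 120;
let Covered = toscalar(
    SecurityAlert
    | where TimeGenerated > ago(90d)
    | where ProviderName == "ASI Scheduled Alerts"
    | mv-expand Technique = todynamic(
        tostring(parse_json(ExtendedProperties).Techniques))
    | extend Technique = tostring(Technique)
    | where isnotempty(Technique)
    | summarize by Technique
    | count);
print CoveragePct = round(100.0 * Covered / ScopedTechniqueCount, 1)
```

Run it at day 0 and again at day 90; the delta between the two percentages is the coverage improvement line in the quarterly report.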
Extend this roadmap
The 90-day plan covers the first quarter. Quarters 2–4 follow the same cadence: 3 campaigns per quarter from the prioritized backlog, 3 detection rules produced per quarter, quarterly metrics and report. After four quarters, you have 12 hunt records, 12+ detection rules, and a full year of metrics data showing the coverage trend. That annual dataset is the evidence for expanding the program — more hours, more analysts, or more ambitious campaigns. The first year builds the case. The second year scales it.
References Used in This Subsection
- Course cross-references: TH0.1 (coverage), TH0.2 (dwell time), TH0.8 (readiness), TH0.10 (data sources), TH0.12 (HMM), TH0.14 (metrics), TH1 (methodology), TH3 (coverage analysis)
NE operational context
The hunting program operates within NE's 18 GB/day Sentinel ingestion environment across 20 connected data sources. Each hunt-derived rule's alert volume, TP rate, and SOC triage burden are calibrated for NE's 3-person SOC team handling 7–16 incidents per day. The detection engineer (Rachel) reviews each rule's health during the monthly tuning review (DE9.9) and adjusts thresholds, exclusions, and entity mapping as the environment evolves.
A rule's position in the overall detection library also matters: it correlates with rules from adjacent kill chain phases, so an alert from one rule gains significance when combined with alerts from earlier or later phases targeting the same entity.
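A hedged sketch of that cross-phase correlation, assuming the standard Sentinel SecurityAlert schema (Tactics as a comma-separated string, Entities as a JSON array); the 24-hour window and the host entity type are illustrative choices, not the course's prescribed values:

```
// Sketch: hosts that triggered alerts from more than one kill chain
// phase (tactic) within the last 24 hours. Window, entity type, and
// the Phases > 1 threshold are illustrative assumptions.
SecurityAlert
| where TimeGenerated > ago(1d)
| mv-expand Tactic = split(Tactics, ",") to typeof(string)
| extend Tactic = trim(" ", Tactic)
| mv-expand Entity = todynamic(Entities)
| where tostring(Entity.Type) == "host"
| extend HostName = tostring(Entity.HostName)
| where isnotempty(HostName)
| summarize Phases = dcount(Tactic),
            Alerts = make_set(AlertName, 10)
    by HostName
| where Phases > 1
| order by Phases desc
```

A host appearing here with, say, an initial-access alert and a lateral-movement alert warrants far more attention than either alert alone.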
Interactive lab: walk the attack chain your detections missed
CHAIN-HARVEST succeeded because no detection rule caught the complete sequence. Walk the attack and identify where a hunt — not a detection rule — would have found the attacker. This is the case for hunting: finding threats that operate in your detection blind spots.
How the hands-on experience works
Every KQL example in this course shows the query AND the expected results. You read the query, study the output, and understand what the data reveals — without needing a separate lab environment.
The interactive labs embedded throughout the course provide the practice layer. Parameter sandboxes let you tune detection thresholds and see the impact in real time. Alert simulators present realistic triage queues. Investigation engines walk you through multi-step investigations with branching decisions. All of this runs in your browser — no setup required.
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage-gap hunt produces a detection rule that closes the gap permanently; the advisory-driven hunt produces a point-in-time assessment confirming the specific threat is not present today. Both are valuable, but the permanent rule outlasts the snapshot.
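One way to turn the coverage matrix into a hunt backlog is an anti-join between your in-scope techniques and the techniques your analytics rules actually fire on. A sketch, assuming you export in-scope technique IDs from ATT&CK Navigator; the datatable below is a placeholder list, not NE's real scope:

```
// Sketch: in-scope techniques with no analytics-rule coverage in 90 days.
// The datatable is a placeholder; export the real list from Navigator.
let ScopedTechniques = datatable(TechniqueId: string) [
    "T1078", "T1110", "T1566", "T1059", "T1021"
];
let CoveredTechniques =
    SecurityAlert
    | where TimeGenerated > ago(90d)
    | where ProviderName == "ASI Scheduled Alerts"
    | mv-expand Technique = todynamic(
        tostring(parse_json(ExtendedProperties).Techniques))
    | project TechniqueId = tostring(Technique)
    | distinct TechniqueId;
ScopedTechniques
| join kind=leftanti CoveredTechniques on TechniqueId
// Each surviving row is an uncovered technique: a candidate hunt hypothesis.
```

The output is the prioritization input for the weeks 3–4 backlog exercise.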
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program