TH1.7 The Hunt Documentation Standard
Why documentation is non-negotiable
You will be tempted to skip this. The hunt is exciting. The queries are interesting. The analysis requires focus. Documenting each step feels like overhead that slows you down.
It is not overhead. It is output. Without documentation, a hunt's findings live only in the analyst's memory; with it, the output is countable — starting with the detection rules hunts produce:
// Hunt program output: detection rules produced from hunts
SecurityAlert
| where TimeGenerated > ago(180d)
| where ProviderName == "ASI Scheduled Alerts"
| where AlertName startswith "HUNT-"
| summarize
    AlertCount = count(),
    FirstFired = min(TimeGenerated),
    LastFired = max(TimeGenerated)
    by AlertName
| sort by FirstFired asc
// Each row = one detection rule that exists because a hunt produced it
// Count the rows = number of hunt-derived detections in production
// This number should grow by ~1 per month in a healthy hunting program
// HUNT- rules absent from these results have not fired in 180 days and may need validation (TH0.4)

Try it yourself
Exercise: Complete your first hunt record
Using the work from TH1.1 through TH1.6 exercises, compile your hunt record using the template above. Fill in all seven sections.
If you completed the identity compromise hunt through TH1.3–TH1.5 exercises, you already have the data for sections 2–6. Section 7 (detection conversion) requires adapting your hunt query as described in TH1.6 and deploying it.
Store the hunt record in your team's documentation system — SharePoint, Confluence, a shared folder, or a dedicated hunt log. The storage location matters less than the discipline of recording every hunt consistently.
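Whatever storage you choose, an append-only log makes the discipline easy to keep. Here is a minimal sketch of a JSON Lines hunt log; the field names and the `hunt_log.jsonl` filename are illustrative, not a prescribed schema — map them to the seven template sections your team uses.

```python
import json
from datetime import date
from pathlib import Path

# Hypothetical minimal hunt record — field names are illustrative,
# not a prescribed schema; map them to the seven template sections.
record = {
    "hunt_id": "HUNT-2024-001",   # assumed naming convention
    "hypothesis": "Attacker persists via additional cloud roles (T1098.003)",
    "date_completed": date.today().isoformat(),
    "disposition": "inconclusive",
    "detection_rule": "HUNT-CloudRoleGrant",
}

log_path = Path("hunt_log.jsonl")  # any shared folder works
with log_path.open("a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")

# Reading the log back gives the team's cumulative hunt history
records = [json.loads(line) for line in log_path.open(encoding="utf-8")]
print(f"{len(records)} hunt(s) recorded")
```

One record appended per completed hunt keeps the log honest: the file's line count is the program's hunt count.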
The myth: Documentation is a compliance requirement, not an operational one. If the organization is not audit-driven, hunt documentation is unnecessary overhead.
The reality: Documentation serves the analyst first, the organization second, and auditors third. The analyst benefits because documented hunts are repeatable — when the same hypothesis needs re-testing in six months, the query chain is preserved. The organization benefits because hunt records accumulate into institutional knowledge — the next analyst who joins the team inherits a library of validated hypotheses, tested queries, and environmental baselines. Auditors benefit last, but the fact that documentation satisfies compliance is a bonus, not the purpose. The primary purpose is operational: making hunting knowledge persistent rather than personal.
Extend this standard
The template above is the minimum viable documentation. Organizations with mature hunting programs often extend it with: executive summary (one paragraph for leadership), MITRE ATT&CK heat map overlay (showing where this hunt falls on the coverage map), estimated dwell time compression (if a compromise was found, how many days of dwell time did hunting save?), and cost avoidance estimate (using the ROI model from TH0.7). TH15 covers these extensions for organizations building formal hunt reporting capabilities.
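The two quantitative extensions above reduce to simple arithmetic once you pick your inputs. The sketch below uses placeholder figures throughout — the benchmark dwell time, detection day, and per-day cost are illustrative assumptions, not values from TH0.7's actual ROI model; substitute your organization's numbers.

```python
# Sketch of the two extension metrics named above. All numbers are
# illustrative placeholders — substitute your organization's figures
# and the actual ROI model from TH0.7.

industry_median_dwell_days = 16   # assumed benchmark from an annual DFIR report
hunt_detection_day = 4            # day of the compromise on which the hunt found it

dwell_time_compression = industry_median_dwell_days - hunt_detection_day

estimated_cost_per_dwell_day = 25_000  # placeholder incident-cost rate
cost_avoidance = dwell_time_compression * estimated_cost_per_dwell_day

print(f"Dwell time compressed by {dwell_time_compression} days")
print(f"Estimated cost avoidance: ${cost_avoidance:,}")
```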
References Used in This Subsection
- MITRE ATT&CK Techniques referenced: T1098.003 (Account Manipulation: Additional Cloud Roles)
- Course cross-references: TH0.7 (ROI metrics), TH1.1–TH1.6 (Hunt Cycle steps), TH15 (extended hunt reporting)
This detection capability integrates with the broader NE detection program — each rule contributes to the cumulative ATT&CK coverage that transforms NE from 7.2% baseline to 35%+ target coverage.
Your hunt report contains 5 findings. 3 are true positives requiring remediation. 2 are 'interesting anomalies' that you cannot definitively classify. Do you include the 2 anomalies in the report?
Yes — with explicit classification. Label them 'Inconclusive — recommend monitoring.' An anomaly that cannot be classified today may become a confirmed finding when additional context emerges (the user leaves the company, a related incident occurs, a new TI advisory describes the same pattern). Excluding inconclusive findings from the report loses the institutional memory. Including them with honest classification sets expectations: 'We found this. We could not determine if it is malicious. We recommend monitoring for recurrence.'
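The policy above can be encoded directly in how findings are structured: make "inconclusive" a first-class disposition rather than a reason to drop a finding. A sketch, with hypothetical finding summaries standing in for the five findings in the scenario:

```python
from dataclasses import dataclass

# Illustrative disposition labels — the point from the scenario is that
# "inconclusive" is a first-class outcome, not an excuse to drop a finding.
TRUE_POSITIVE = "true_positive"
INCONCLUSIVE = "inconclusive_monitor"   # "Inconclusive — recommend monitoring"

@dataclass
class Finding:
    summary: str
    disposition: str
    recommendation: str

# Hypothetical findings matching the 3 + 2 split in the scenario
findings = [
    Finding("OAuth consent to unverified app", TRUE_POSITIVE, "Revoke and remediate"),
    Finding("Mailbox rule forwarding externally", TRUE_POSITIVE, "Remove rule, reset creds"),
    Finding("Sign-in from rare ASN during user PTO", TRUE_POSITIVE, "Contain host"),
    Finding("Burst of role enumeration by svc acct", INCONCLUSIVE, "Monitor for recurrence"),
    Finding("Single off-hours admin portal login", INCONCLUSIVE, "Monitor for recurrence"),
]

# Everything goes in the report; the disposition label sets expectations.
report = {d: [f.summary for f in findings if f.disposition == d]
          for d in (TRUE_POSITIVE, INCONCLUSIVE)}
print(f"{len(report[TRUE_POSITIVE])} true positives, "
      f"{len(report[INCONCLUSIVE])} inconclusive findings reported")
```

Both buckets end up in the report, so the inconclusive findings survive as institutional memory instead of disappearing with the analyst's notes.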
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program