TH1.7 The Hunt Documentation Standard

3-4 hours · Module 1 · Free
Operational Objective
An undocumented hunt never happened — as far as the organization is concerned. The hunt record is what transforms individual analyst effort into organizational knowledge. It captures the hypothesis, the scope, every query run, the analysis reasoning, the conclusion, and the detection rule produced. This subsection defines the documentation standard used throughout the course, provides the complete template, and walks through a worked example.
Deliverable: The complete hunt documentation template and the discipline to use it on every hunt.
⏱ Estimated completion: 25 minutes

Why documentation is non-negotiable

You will be tempted to skip this. The hunt is exciting. The queries are interesting. The analysis requires focus. Documenting each step feels like overhead that slows you down.

It is not overhead. It is output. Without documentation:

- The hunt cannot be repeated by another analyst (or by you, three months from now, when you have forgotten the details)
- The negative finding cannot be cited in a compliance audit
- The detection rule produced lacks traceability to the evidence that justified it
- The organization cannot measure hunting program effectiveness (TH0.7 metrics)
- The knowledge from the hunt dies with the analyst's memory

The fourth point is directly measurable. This query counts the detection rules that exist only because a hunt produced them:

// Hunt program output: detection rules produced from hunts
SecurityAlert
| where TimeGenerated > ago(180d)
| where ProviderName == "ASI Scheduled Alerts"
| where AlertName startswith "HUNT-"
| summarize
    AlertCount = count(),
    FirstFired = min(TimeGenerated),
    LastFired = max(TimeGenerated)
    by AlertName
| sort by FirstFired asc
// Each row = one detection rule that exists because a hunt produced it
// Count the rows = number of hunt-derived detections in production
// This number should grow by ~1 per month in a healthy hunting program
// Rules with zero AlertCount in 90 days may need validation (TH0.4)

The documentation standard does not require a formal report for every hunt. It requires structured notes — completable in 15–20 minutes after the hunt — that capture the information needed for all five purposes above.
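The "~1 per month" health check from the query comments can be computed directly from rule deploy dates. A minimal sketch in Python, with hypothetical dates standing in for the exported query results:

```python
from datetime import date

# Hypothetical deploy dates of hunt-derived detection rules
# (in practice, exported from the SecurityAlert query above)
deploy_dates = [
    date(2026, 1, 14),
    date(2026, 2, 3),
    date(2026, 3, 28),
]

def rules_per_month(dates, as_of):
    """Average hunt-derived rules deployed per month since the first deploy."""
    if not dates:
        return 0.0
    # 30.44 = average month length in days; floor at one month to avoid
    # inflated rates early in the program
    months = max((as_of - min(dates)).days / 30.44, 1.0)
    return len(dates) / months

rate = rules_per_month(deploy_dates, as_of=date(2026, 4, 1))
# A healthy hunting program sustains a rate near 1.0 or above
print(f"{rate:.2f} hunt-derived rules/month")
```

The function name and the 30.44-day constant are illustrative choices, not part of the documentation standard.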

HUNT RECORD — SEVEN SECTIONS, ONE PER CYCLE STEP
1. HEADER — ID, analyst, dates
2. HYPOTHESIS — Statement, source, ATT&CK technique
3. SCOPE — Data, time, population
4. COLLECTION — Query log with full KQL preserved
5. ANALYSIS — Enrichment per result
6. CONCLUSION — Confirmed / Refuted / Inconclusive
7. CONVERT — Rule name, ID, severity, deploy date

Complete each section as you finish the corresponding Hunt Cycle step — not retrospectively. Total documentation time: 15–20 minutes per hunt (incremental across the session).

Figure TH1.7 — Hunt record structure. Seven sections map to the six Hunt Cycle steps plus an administrative header. The record is completed incrementally during the hunt, not as a post-hunt report.

The hunt record template

Every hunt in this course produces a hunt record with these sections. The template is designed to be completed incrementally — fill in each section as you complete the corresponding Hunt Cycle step, not as a retrospective report after the hunt is finished.

Section 1: Header

Hunt ID: TH-[YYYY]-[NNN] (e.g., TH-2026-001)
Analyst: [name]
Date started: [date]
Date completed: [date]
Status: [In progress / Completed — Confirmed / Completed — Refuted / Completed — Inconclusive]
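The Hunt ID and Status fields are rigid enough to validate mechanically before a record is filed. A minimal sketch in Python; the pattern and status list mirror the template, but the helper itself is a hypothetical addition:

```python
import re

HUNT_ID_PATTERN = re.compile(r"^TH-\d{4}-\d{3}$")  # TH-[YYYY]-[NNN]
VALID_STATUSES = {
    "In progress",
    "Completed — Confirmed",
    "Completed — Refuted",
    "Completed — Inconclusive",
}

def validate_header(hunt_id: str, status: str) -> list[str]:
    """Return a list of problems with a hunt record header (empty = valid)."""
    problems = []
    if not HUNT_ID_PATTERN.match(hunt_id):
        problems.append(f"bad hunt ID: {hunt_id!r} (expected TH-YYYY-NNN)")
    if status not in VALID_STATUSES:
        problems.append(f"unknown status: {status!r}")
    return problems

print(validate_header("TH-2026-001", "Completed — Refuted"))  # []
print(validate_header("HUNT-01", "Done"))  # two problems reported
```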

Section 2: Hypothesis (from step 1)

Hypothesis statement: [specific, testable, grounded, actionable]
Source: [ATT&CK gap / prior incident / TI report / environmental change / rule failure / peer community]
Source reference: [report URL, incident ID, ATT&CK technique ID, etc.]
ATT&CK technique(s): [T-number and name]

Section 3: Scope (from step 2)

Data sources: [table names]
Time window — detection: [start] to [end]
Time window — baseline: [start] to [end] (if applicable)
Population: [full tenant / targeted — specify]
Success criteria — positive: [what constitutes a finding]
Success criteria — negative: [what constitutes adequate coverage]

Section 4: Collection (from step 3)

Query log:

For each query step, record: the step number, its purpose (orientation, indicator, enrichment, or pivot), the table(s) queried, the result count, the assessment (expected or unexpected volume), and the decision taken (narrow, pivot, escalate, or close).

[Full KQL for each query preserved — copy-paste from Advanced Hunting]
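Teams that keep the query log in a structured form rather than free text can carry exactly the fields listed above on each row. A minimal sketch as a Python dataclass; the field names and the example entry are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class QueryLogEntry:
    """One row of a hunt's query log (Section 4 of the hunt record)."""
    step: int            # sequence number within the hunt
    purpose: str         # orientation / indicator / enrichment / pivot
    tables: list[str]    # table(s) queried
    result_count: int
    assessment: str      # expected / unexpected volume
    decision: str        # narrow / pivot / escalate / close
    kql: str             # full query text, copy-pasted from Advanced Hunting

# Hypothetical first entry for an OAuth-consent hunt
q1 = QueryLogEntry(
    step=1,
    purpose="orientation",
    tables=["AuditLogs"],
    result_count=347,
    assessment="expected",
    decision="narrow",
    kql='AuditLogs | where OperationName == "Consent to application"',
)
print(q1.purpose, q1.result_count)
```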

Section 5: Analysis (from step 4)

Suspect results: [N]
Per-result enrichment:

For each result under analysis, record the assessment across each dimension: user context (role, access history), temporal context (time of day, event correlation), geographic context (IP location, travel patterns), behavioral context (deviation from baseline), the number of correlated dimensions (how many dimensions support the suspicion), and the confidence level (High, Medium, or Low).
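One way to make the confidence call repeatable across analysts is to derive it from the number of correlated dimensions. The thresholds below are illustrative assumptions, not part of the standard:

```python
def confidence(correlated_dimensions: int) -> str:
    """Map correlated context dimensions (0-4) to a confidence level.

    Assumed thresholds: 3+ agreeing dimensions -> High,
    2 -> Medium, 0-1 -> Low.
    """
    if correlated_dimensions >= 3:
        return "High"
    if correlated_dimensions == 2:
        return "Medium"
    return "Low"

# e.g. unusual time + unusual geography + baseline deviation = 3 dimensions
print(confidence(3))  # High
print(confidence(1))  # Low
```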

Section 6: Conclusion (from step 5)

Outcome: [Confirmed / Refuted / Inconclusive]
Finding summary: [2-3 sentences]
If confirmed: Escalated as incident [ID]. Containment actions: [list].
If refuted: Negative finding documented. No evidence of [technique] in [data] across [period].
If inconclusive: Gap preventing resolution: [description]. Refined hypothesis added to backlog as [ID].

Section 7: Detection conversion (from step 6)

Query converted: [Y/N]
If Y:
  Rule name: [HUNT-campaign-sequence: description]
  Rule ID: [Sentinel analytics rule GUID]
  Severity: [H/M/L/Info]
  ATT&CK mapping: [technique ID]
  Deployed date: [date]
  Validation period: [14 days from deployment]
If N: Reason: [query too complex for scheduled execution / technique better suited for periodic hunting / etc.]
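Because the record reduces to seven named sections, completeness is easy to check before the record is filed. A minimal sketch in Python; the section keys follow the template, while the checker and the example record are hypothetical:

```python
REQUIRED_SECTIONS = [
    "header", "hypothesis", "scope", "collection",
    "analysis", "conclusion", "conversion",
]

def missing_sections(record: dict) -> list[str]:
    """Return the template sections that are absent or empty in a hunt record."""
    return [s for s in REQUIRED_SECTIONS if not record.get(s)]

# Hypothetical in-progress record
record = {
    "header": {"hunt_id": "TH-2026-003", "analyst": "R. Okafor"},
    "hypothesis": "Malicious OAuth consent with Mail.ReadWrite in 90d",
    "scope": "AuditLogs, 90 days, full tenant",
    "collection": ["Q1: orientation ...", "Q2: filter ..."],
    "analysis": "4 results, all legitimate productivity apps",
    "conclusion": "Refuted",
    # "conversion" not yet filled in
}
print(missing_sections(record))  # ['conversion']
```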

Worked example: abbreviated hunt record

Hunt ID: TH-2026-003
Analyst: R. Okafor
Date: 2026-03-28 to 2026-03-28
Status: Completed — Refuted

Hypothesis: If an attacker has consented to a malicious OAuth application with Mail.ReadWrite permissions, AuditLogs will contain "Consent to application" operations from non-admin users granting delegated permissions including Mail.ReadWrite or Files.ReadWrite.All in the last 90 days.
Source: ATT&CK coverage gap (no detection rule for T1098.003 in current analytics rules)
ATT&CK: T1098.003 — Account Manipulation: Additional Cloud Roles

Scope: AuditLogs + AADServicePrincipalSignInLogs | 90 days | Full tenant
Positive criterion: user-consented app with Mail.ReadWrite from non-IT user
Negative criterion: full tenant examined, no matching consent events

Collection:
Q1: Orientation — consent events in 90d. 347 consent operations. Expected for org size.
Q2: Filter to delegated permissions including Mail.ReadWrite or Files.ReadWrite.All. 12 results.
Q3: Filter to non-admin users (exclude IT department service accounts). 4 results.
Q4: Enrichment — app registration details, sign-in activity post-consent. All 4 apps are known productivity tools (Grammarly, Adobe Acrobat, Zoom, Microsoft To-Do). Sign-in patterns normal. No data access anomalies post-consent.

Analysis: 4 results, all legitimate productivity applications consented by users during onboarding. No malicious or unknown applications found. Confidence: no finding.

Conclusion: Refuted. No evidence of malicious OAuth consent in 90-day window across full tenant. 4 user-consented apps reviewed and confirmed legitimate.

Detection conversion: Yes.
Rule: HUNT-TH6-001: High-privilege OAuth consent by non-admin user
Logic: deploys Q2 logic (consent operations with sensitive permissions from non-admin users) as a scheduled analytics rule, running every 4 hours with 6-hour lookback.
Exclusions: known productivity apps (Grammarly, Adobe, Zoom, To-Do) added to allowlist based on hunt analysis.
Severity: Medium. Deployed: 2026-03-28. Validation: 14 days.
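The allowlist exclusion in this conversion is a plain set-membership filter. A minimal sketch in Python; the allowlisted names come from the worked example, while the event structure and the unknown app name are hypothetical:

```python
# Allowlist derived from the hunt analysis (worked example above)
ALLOWLISTED_APPS = {"Grammarly", "Adobe Acrobat", "Zoom", "Microsoft To-Do"}

# Hypothetical consent events as the scheduled rule might see them
consent_events = [
    {"app": "Grammarly", "user": "a.chen", "permissions": ["Mail.ReadWrite"]},
    {"app": "MailSyncPro", "user": "b.diaz", "permissions": ["Mail.ReadWrite"]},
]

def alertable(events):
    """Keep only consents to apps outside the allowlist (the rule's alert set)."""
    return [e for e in events if e["app"] not in ALLOWLISTED_APPS]

for e in alertable(consent_events):
    print(f"ALERT: {e['user']} consented to {e['app']} with {e['permissions']}")
```

In the deployed Sentinel rule the same exclusion would live in the KQL itself (e.g. a not-in filter against a watchlist), with this sketch showing only the logic.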

This example took 6 hours of hunting and 20 minutes of documentation. The output: a negative finding that confirms no OAuth abuse in the last 90 days, and a permanent detection rule that monitors for it going forward. Before this hunt, the organization had zero visibility into this technique. After the hunt, they have documented assurance and automated monitoring. That is the value of one structured hunt.

Use the HUNT- prefix naming convention consistently. The SecurityAlert query earlier in this subsection tracks your hunt-derived detection rules — the tangible output of your hunting program.

Try it yourself

Exercise: Complete your first hunt record

Using the work from TH1.1 through TH1.6 exercises, compile your hunt record using the template above. Fill in all seven sections.

If you completed the identity compromise hunt through the TH1.3–TH1.5 exercises, you already have the data for sections 2–6. Section 7 (detection conversion) requires adapting your hunt query as described in TH1.6 and deploying it.

Store the hunt record in your team's documentation system — SharePoint, Confluence, a shared folder, or a dedicated hunt log. The storage location matters less than the discipline of recording every hunt consistently.

⚠ Compliance Myth: "Hunt documentation is only necessary for auditors"

The myth: Documentation is a compliance requirement, not an operational one. If the organization is not audit-driven, hunt documentation is unnecessary overhead.

The reality: Documentation serves the analyst first, the organization second, and auditors third. The analyst benefits because documented hunts are repeatable — when the same hypothesis needs re-testing in six months, the query chain is preserved. The organization benefits because hunt records accumulate into institutional knowledge — the next analyst who joins the team inherits a library of validated hypotheses, tested queries, and environmental baselines. Auditors benefit last, but the fact that documentation satisfies compliance is a bonus, not the purpose. The primary purpose is operational: making hunting knowledge persistent rather than personal.

Extend this standard

The template above is the minimum viable documentation. Organizations with mature hunting programs often extend it with: executive summary (one paragraph for leadership), MITRE ATT&CK heat map overlay (showing where this hunt falls on the coverage map), estimated dwell time compression (if a compromise was found, how many days of dwell time did hunting save?), and cost avoidance estimate (using the ROI model from TH0.7). TH15 covers these extensions for organizations building formal hunt reporting capabilities.


References Used in This Subsection

  • MITRE ATT&CK Techniques referenced: T1098.003 (Account Manipulation: Additional Cloud Roles)
  • Course cross-references: TH0.7 (ROI metrics), TH1.1–TH1.6 (Hunt Cycle steps), TH15 (extended hunt reporting)

This detection capability integrates with the broader NE detection program — each rule contributes to the cumulative ATT&CK coverage that transforms NE from 7.2% baseline to 35%+ target coverage.

Decision point

Your hunt report contains 5 findings. 3 are true positives requiring remediation. 2 are 'interesting anomalies' that you cannot definitively classify. Do you include the 2 anomalies in the report?

Yes — with explicit classification. Label them 'Inconclusive — recommend monitoring.' An anomaly that cannot be classified today may become a confirmed finding when additional context emerges (the user leaves the company, a related incident occurs, a new TI advisory describes the same pattern). Excluding inconclusive findings from the report loses the institutional memory. Including them with honest classification sets expectations: 'We found this. We could not determine if it is malicious. We recommend monitoring for recurrence.'

A hunt query returns 200 results. You have 4 hours remaining in the hunt window. You can investigate 20 results thoroughly or review all 200 superficially. Which approach produces better hunt outcomes?

- Review all 200 — you might miss a critical finding in the 180 you skip.
- Investigate 20 thoroughly.
- Investigate 20 — but only if they are from the most recent 24 hours.
- Neither — refine the query first to reduce the result set below 50.

Investigate 20 thoroughly. A superficial review of 200 results produces 200 'looked at it, seemed okay' assessments that provide no investigative value and no documentation for future reference. A thorough investigation of 20 results produces: confirmed findings (true positives requiring remediation), confirmed benign patterns (documented baselines for future comparison), and inconclusive results (flagged for monitoring). Prioritise the 20 by: highest anomaly score, highest-value assets involved, and highest-risk users involved. Document why the remaining 180 were not investigated and recommend a follow-up hunt with refined query criteria to reduce the result set.

You understand the detection gap and the hunt cycle.

TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.

  • 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
  • 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
  • Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
  • Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
  • TH16 — Scaling hunts across a team — the operating model for a production hunt program