TH1.5 Concluding the Hunt
Three possible outcomes
Every hunt ends in one of three states. The conclusion must be explicit — written down, not implied.
Figure TH1.5 — Three hunt outcomes. Confirmed and refuted hunts both produce detection rules. Inconclusive hunts produce refined hypotheses. Every outcome produces documentation.
Outcome 1: Hypothesis confirmed — compromise found
The analysis produced a high-confidence finding. Correlated evidence across three or more enrichment dimensions supports the conclusion that the technique described in the hypothesis has occurred in your environment.
// Dwell time compression for a hunt-discovered compromise
// Run after identifying the compromised account and earliest evidence
let compromisedUser = "j.morrison@northgateeng.com";
let huntDiscoveryDate = datetime(2026-03-28);
// Find the earliest anomalous activity for this user
SigninLogs
| where TimeGenerated between (ago(90d) .. huntDiscoveryDate)
| where UserPrincipalName == compromisedUser
| where IPAddress in ("203.0.113.47") // Attacker IP from hunt finding
| summarize EarliestAttackerActivity = min(TimeGenerated)
| extend HuntDiscovery = huntDiscoveryDate
| extend DwellDays = datetime_diff('day', HuntDiscovery, EarliestAttackerActivity)
// DwellDays = how long the attacker was present before hunting found them
// Without hunting, dwell time would have continued until a rule fired
// or external notification arrived — potentially weeks or months longer
// The difference is the dwell time compression attributable to this hunt
Try it yourself
Exercise: Write a hunt conclusion
Using the results from the TH1.3 and TH1.4 exercises, write a formal conclusion. Use one of the three outcome templates:
If you found a high-confidence finding: Write the escalation package — finding, evidence, timeline, scope assessment, recommended containment.
If you found no evidence: Write the negative finding documentation — hypothesis, data sources, time window, population, conclusion, and note which query is a candidate for detection rule conversion.
If results were ambiguous: Write the inconclusive documentation — what was found, what prevented resolution, and what would resolve it.
This conclusion is the fifth section of your hunt record. It determines what happens next: IR escalation, detection rule creation, or backlog refinement.
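When a negative finding leads to detection rule creation (TH1.6), the main change is reshaping the query from exploratory output over a long window into alertable entities over a short, repeating one. A minimal sketch of that conversion — the sign-in logic, threshold, and one-hour window below are illustrative stand-ins, not the specific query from the TH1.3/TH1.4 exercises:

// Hunt query reshaped into detection-rule form (illustrative logic)
SigninLogs
| where TimeGenerated > ago(1h) // rule frequency window replaces the 90d hunt window
| where ResultType == 0
| summarize DistinctIPs = dcount(IPAddress), SourceIPs = make_set(IPAddress, 10)
    by UserPrincipalName, bin(TimeGenerated, 1h)
| where DistinctIPs >= 3 // threshold drawn from hunt-phase baselining
| project TimeGenerated, UserPrincipalName, tostring(SourceIPs) // entities for alert mapping

The hunt version optimizes for analyst readability across 90 days; the rule version optimizes for low noise on every scheduled run.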
The documentation requirement
A hunt without documentation is an investigation that happened once and taught nobody. The hunt conclusion document captures: the original hypothesis, the data sources queried, the KQL patterns used, the findings (positive or negative), and the recommended actions. Negative findings — hunts that refuted the hypothesis — are valuable because they narrow the search space for future hunts and validate that certain threat scenarios are not present in the environment. Document negative findings with the same rigor as positive ones.
The myth: If the hunt does not produce a clear yes or no, the methodology failed. Hunts should always reach a definitive conclusion.
The reality: Real data is ambiguous. Attackers deliberately create ambiguity — their techniques are designed to look like legitimate activity. An inconclusive result that honestly documents the ambiguity, identifies the specific gap that prevented resolution, and adds a refined hypothesis to the backlog is more valuable than a false-confident conclusion that closes the investigation prematurely. Inconclusive hunts also reveal environmental limitations — missing data sources, inadequate baselines, insufficient enrichment — that, when addressed, improve the next hunt's ability to reach a conclusion.
Extend this approach
In organizations with formal SOC reporting, hunt conclusions should be included in monthly or quarterly security operations reports. The format: "X hunts conducted. Y hypotheses confirmed (escalated to IR). Z hypotheses refuted (negative findings documented). W hypotheses inconclusive (refined and re-queued)." This reporting demonstrates proactive security activity to leadership and audit. TH15 covers hunt reporting for leadership audiences in detail.
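The rollup itself is a single summarize once hunt dispositions are recorded somewhere queryable. A minimal sketch — the HuntRegister datatable below is a stand-in for wherever your team tracks hunt outcomes (a custom table, a watchlist, or an exported register), and its column names and values are illustrative assumptions:

// Quarterly hunt-outcome rollup (HuntRegister is a stand-in for your tracking table)
let HuntRegister = datatable(HuntId: string, Closed: datetime, Outcome: string) [
    "TH-2026-011", datetime(2026-01-14), "Confirmed",
    "TH-2026-012", datetime(2026-02-03), "Refuted",
    "TH-2026-013", datetime(2026-02-21), "Refuted",
    "TH-2026-014", datetime(2026-03-28), "Inconclusive"
];
HuntRegister
| where Closed between (datetime(2026-01-01) .. datetime(2026-03-31))
| summarize Hunts = count() by Outcome
// These counts populate the "X hunts conducted, Y confirmed..." report line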
References Used in This Subsection
- Course cross-references: TH0.7 (value of negative findings), TH1.6 (detection rule conversion), TH15 (hunt reporting for leadership)
NE environmental considerations
NE's detection environment includes specific factors that shape how hunts and the detection rules they produce operate:
Device diversity: 768 P2 corporate workstations with full Defender for Endpoint telemetry, 58 P1 manufacturing workstations with basic cloud-delivered protection, and 3 RHEL rendering servers with Syslog-only coverage. Rules targeting DeviceProcessEvents operate with full fidelity on P2 devices but may have reduced visibility on P1 devices. Manufacturing workstations in Sheffield and Sunderland represent a detection gap for endpoint-level detections.
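That visibility gap can be measured rather than assumed. A sketch of a coverage check — DeviceInfo and DeviceProcessEvents are the standard Defender for Endpoint tables as surfaced in Sentinel, and the 7-day quiet threshold is an assumption to tune:

// Devices onboarded to Defender but producing no process telemetry in 7 days
DeviceInfo
| where TimeGenerated > ago(7d)
| summarize arg_max(TimeGenerated, OSPlatform) by DeviceId, DeviceName
| join kind=leftanti (
    DeviceProcessEvents
    | where TimeGenerated > ago(7d)
    | distinct DeviceId
) on DeviceId
| project DeviceName, OSPlatform
// Rules targeting DeviceProcessEvents are blind on every device this returns —
// at NE, expect the P1 manufacturing workstations to dominate the list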
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program