TH1.5 Concluding the Hunt
Three possible outcomes
Every hunt ends in one of three states. The conclusion must be explicit — written down, not implied.
Figure TH1.5 — Three hunt outcomes. Confirmed and refuted hunts both produce detection rules. Inconclusive hunts produce refined hypotheses. Every outcome produces documentation.
Outcome 1: Hypothesis confirmed — compromise found
The analysis produced a high-confidence finding. Correlated evidence across three or more enrichment dimensions supports the conclusion that the technique described in the hypothesis has occurred in your environment.
Immediate action: Escalate to IR. The escalation package includes:
- The finding: what was discovered, which user/device/application is affected
- The evidence: the query chain that produced the finding, with result data
- The timeline: when the anomalous activity began, the dwell time estimate
- The initial scope assessment: how many accounts, devices, or systems appear affected
- The recommended containment: session revocation, password reset, OAuth consent removal, inbox rule deletion — whatever is appropriate for the technique
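The escalation package above can be captured as a simple structure so nothing is omitted under time pressure. This is a minimal Python sketch; the class name, field names, and the example finding are illustrative assumptions, not a prescribed IR schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class EscalationPackage:
    """Escalation package for a confirmed hunt finding.
    Field names are illustrative, not a prescribed IR schema."""
    finding: str                 # what was discovered, which user/device/app is affected
    evidence: list[str]          # the query chain that produced the finding, with result data
    activity_start: datetime     # when the anomalous activity began
    dwell_time_days: int         # dwell time estimate at time of escalation
    scope: dict[str, int]        # initial scope, e.g. {"accounts": 1, "devices": 2}
    containment: list[str] = field(default_factory=list)  # recommended actions

    def summary(self) -> str:
        actions = ", ".join(self.containment) or "none proposed"
        return (f"FINDING: {self.finding} | first activity {self.activity_start:%Y-%m-%d} "
                f"(~{self.dwell_time_days} days dwell) | scope {self.scope} | "
                f"containment: {actions}")

# Hypothetical finding, used only to show the shape of the package
pkg = EscalationPackage(
    finding="suspicious OAuth app reading mail for user jdoe",
    evidence=["query 3 of 8: anomalous Graph mail reads, 412 events"],
    activity_start=datetime(2024, 3, 1),
    dwell_time_days=18,
    scope={"accounts": 1, "applications": 1},
    containment=["revoke OAuth consent", "revoke sessions", "reset password"],
)
print(pkg.summary())
```

A structure like this also makes the package easy to paste into a ticket or hunt record with every required element present.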
Do not wait for the hunt to complete before escalating a confirmed compromise. If query 3 of 8 produces a high-confidence finding, escalate immediately. The remaining queries can continue in parallel with the IR response, or they can be incorporated into the IR investigation scope.
The escalation is the highest-value hunting output. It is a compromise discovered before any detection rule fired — an intrusion that would have continued undetected without the hunt.
When you escalate, calculate the dwell time compression — the key metric that proves hunting’s value for this specific finding:
| Metric | How to calculate |
| --- | --- |
| Dwell time at hunt discovery | Hunt discovery date minus earliest evidence of anomalous activity |
| Projected dwell time without the hunt | Estimated date existing detections (or external notification) would have caught the activity, minus earliest evidence |
| Dwell time compression | Projected dwell time minus dwell time at hunt discovery |
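The calculation is simple arithmetic on three dates. A minimal sketch in Python, assuming the hunter supplies the projected discovery date as a judgment call (there is no authoritative source for when an undetected intrusion would otherwise have been found):

```python
from datetime import date

def dwell_time_compression(first_activity: date,
                           hunt_discovery: date,
                           projected_discovery: date) -> dict:
    """Dwell time compression for a confirmed finding.

    projected_discovery is the hunter's estimate of when existing
    detections or an external notification would have caught the
    activity without the hunt; it is necessarily an estimate.
    """
    observed = (hunt_discovery - first_activity).days
    projected = (projected_discovery - first_activity).days
    return {
        "observed_dwell_days": observed,
        "projected_dwell_days": projected,
        "compression_days": projected - observed,
    }

# Hypothetical dates for illustration
print(dwell_time_compression(date(2024, 3, 1),   # earliest anomalous activity
                             date(2024, 3, 19),  # hunt discovery
                             date(2024, 5, 1)))  # projected discovery without the hunt
# → {'observed_dwell_days': 18, 'projected_dwell_days': 61, 'compression_days': 43}
```

The compression figure (here, 43 days of attacker access prevented) is the number to carry into the escalation package and the program's reporting.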
Outcome 2: Hypothesis refuted — no evidence found
The hunt examined the full scope (all data sources, full time window, entire population) and found no evidence supporting the hypothesis. The technique either has not occurred in your environment during the hunt window or has occurred in a way that left no trace in the available telemetry.
This is a positive outcome. Document it:
“Hunt [ID]: Tested hypothesis [statement]. Examined [data sources] across [time window] for [population]. No evidence of [technique] found. Hypothesis refuted for the period [start date] to [end date].”
The documentation serves four purposes. It reduces organizational uncertainty — before the hunt, you did not know whether this technique had been used against you. Now you know it has not (within the bounds of your data). It satisfies compliance requirements for proactive monitoring. It establishes a baseline for future comparison. And it feeds the Convert step — the hunt query, now validated against your data, becomes a candidate for deployment as a scheduled analytics rule.
Outcome 3: Hypothesis inconclusive
The hunt produced results that are ambiguous after analysis. One or two enrichment dimensions show anomalies, but correlated evidence is insufficient for a high-confidence finding. Legitimate explanations remain plausible.
This outcome requires judgment. Three options:
Investigate further within the hunt. Run additional enrichment queries. Expand to adjacent data sources. Check longer time windows. If additional analysis resolves the ambiguity (either confirming or refuting), conclude accordingly.
Escalate with caveats. If the ambiguous finding has high enough potential impact (privilege escalation, data exfiltration indicators), escalate to IR as a medium-confidence lead rather than a confirmed finding. The IR team investigates further with access to additional context (user interviews, endpoint forensics) that the hunter may not have.
Document and re-hunt. If the ambiguity cannot be resolved with available data, document the inconclusive finding with the specific gap that prevented resolution: “Unable to determine whether user X’s SharePoint access was legitimate without confirming their project involvement — manager confirmation needed.” Add a refined hypothesis to the backlog for the next hunt cycle, incorporating the additional data source or context needed.
The inconclusive outcome is operationally uncomfortable but methodologically sound. Pretending ambiguous results are either confirmed or refuted is worse than documenting the ambiguity honestly. The hunt record for an inconclusive result should explain what was found, what prevented a definitive conclusion, and what would resolve the ambiguity.
Documenting negative findings
Negative findings deserve the same documentation rigor as positive findings. The temptation is to document positive findings thoroughly (they are interesting) and dismiss negative findings with a single sentence (they are not). This asymmetry undermines the hunting program by making it appear unproductive, when in fact most well-run hunts end without a confirmed compromise.
A well-documented negative finding answers:
- What hypothesis was tested?
- What data sources were examined?
- What time window was covered?
- What population was included?
- What queries were run (with result counts)?
- What was the conclusion?
- What detection rule was produced from the hunt query?
This documentation proves the organization is proactively monitoring for the technique. It proves the technique was not found — which is different from proving it was not looked for. And it provides the raw material for the Convert step, where the hunt query becomes a permanent detection.
Try it yourself
Exercise: Write a hunt conclusion
Using the results from the TH1.3 and TH1.4 exercises, write a formal conclusion for your hunt. Use one of the three outcome templates:
If you found a high-confidence finding: Write the escalation package — finding, evidence, timeline, scope assessment, recommended containment.
If you found no evidence: Write the negative finding documentation — hypothesis, data sources, time window, population, conclusion, and note which query is a candidate for detection rule conversion.
If results were ambiguous: Write the inconclusive documentation — what was found, what prevented resolution, and what would resolve it.
This conclusion is the fifth section of your hunt record. It determines what happens next: IR escalation, detection rule creation, or backlog refinement.
The myth: If the hunt does not produce a clear yes or no, the methodology failed. Hunts should always reach a definitive conclusion.
The reality: Real data is ambiguous. Attackers deliberately create ambiguity — their techniques are designed to look like legitimate activity. An inconclusive result that honestly documents the ambiguity, identifies the specific gap that prevented resolution, and adds a refined hypothesis to the backlog is more valuable than a false-confident conclusion that closes the investigation prematurely. Inconclusive hunts also reveal environmental limitations — missing data sources, inadequate baselines, insufficient enrichment — that, when addressed, improve the next hunt’s ability to reach a conclusion.
Extend this approach
In organizations with formal SOC reporting, hunt conclusions should be included in monthly or quarterly security operations reports. The format: "X hunts conducted. Y hypotheses confirmed (escalated to IR). Z hypotheses refuted (negative findings documented). W hypotheses inconclusive (refined and re-queued)." This reporting demonstrates proactive security activity to leadership and audit. TH15 covers hunt reporting for leadership audiences in detail.
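The rollup in that format is a straightforward count over concluded hunts. A minimal sketch, assuming each hunt record carries one of the three outcome labels used in this section:

```python
from collections import Counter

def hunt_report_line(outcomes):
    """Build the leadership reporting line from a list of hunt outcomes.

    outcomes: one of 'confirmed' | 'refuted' | 'inconclusive' per
    concluded hunt, matching the three outcomes in this section.
    """
    c = Counter(outcomes)
    return (f"{len(outcomes)} hunts conducted. "
            f"{c['confirmed']} hypotheses confirmed (escalated to IR). "
            f"{c['refuted']} hypotheses refuted (negative findings documented). "
            f"{c['inconclusive']} hypotheses inconclusive (refined and re-queued).")

print(hunt_report_line(["confirmed", "refuted", "refuted", "inconclusive"]))
```

Counting all three outcome types, rather than only confirmed findings, is what makes the report reflect the program's real output.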
References Used in This Subsection
- Course cross-references: TH0.7 (value of negative findings), TH1.6 (detection rule conversion), TH15 (hunt reporting for leadership)