TH1.5 Concluding the Hunt

3-4 hours · Module 1 · Free
Operational Objective
A hunt without a conclusion is a hunt without value. The conclusion step formalizes the outcome — hypothesis confirmed, hypothesis refuted, or hypothesis inconclusive — and triggers the appropriate action: IR escalation, negative finding documentation, or refinement for re-hunt. This subsection teaches you to close hunts decisively, document conclusions for organizational memory, and handle the edge cases that make conclusions ambiguous.
Deliverable: The ability to formally conclude a hunt with a documented outcome, appropriate escalation, and a clear record that the organization can reference.
⏱ Estimated completion: 20 minutes

Three possible outcomes

Every hunt ends in one of three states. The conclusion must be explicit — written down, not implied.

THREE HUNT OUTCOMES — EVERY HUNT ENDS IN ONE

  • CONFIRMED — 3+ dimensions correlated → escalate to IR immediately; convert query to detection rule
  • REFUTED — full scope examined, no evidence → document negative finding; convert query to detection rule
  • INCONCLUSIVE — ambiguous, a gap prevents resolution → document gap + reasoning; refine hypothesis, re-queue

Figure TH1.5 — Three hunt outcomes. Confirmed and refuted hunts both produce detection rules. Inconclusive hunts produce refined hypotheses. Every outcome produces documentation.

Outcome 1: Hypothesis confirmed — compromise found

The analysis produced a high-confidence finding. Correlated evidence across three or more enrichment dimensions supports the conclusion that the technique described in the hypothesis has occurred in your environment.

Immediate action: Escalate to IR. The escalation package includes:

  • The finding: what was discovered, which user/device/application is affected
  • The evidence: the query chain that produced the finding, with result data
  • The timeline: when the anomalous activity began, the dwell time estimate
  • The initial scope assessment: how many accounts, devices, or systems appear affected
  • The recommended containment: session revocation, password reset, OAuth consent removal, inbox rule deletion — whatever is appropriate for the technique
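The initial scope assessment in the package can be pulled with a single query. A minimal sketch, assuming the `SigninLogs` table used elsewhere in this subsection; the attacker IP is the illustrative value from the dwell-time example below, and the 90-day window is an assumption you should match to your retention:

```kql
// Initial scope assessment: accounts and applications touched by the attacker IP
// 203.0.113.47 is an illustrative attacker IP from the hunt finding
SigninLogs
| where TimeGenerated > ago(90d)
| where IPAddress == "203.0.113.47"
| summarize
    AffectedAccounts = dcount(UserPrincipalName),
    Accounts = make_set(UserPrincipalName, 50),
    Applications = make_set(AppDisplayName, 50),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated)
```

The result feeds the scope line of the escalation package directly: how many accounts, which applications, and the activity window.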

Do not wait for the hunt to complete before escalating a confirmed compromise. If query 3 of 8 produces a high-confidence finding, escalate immediately. The remaining queries can continue in parallel with the IR response, or they can be incorporated into the IR investigation scope.

The escalation is the highest-value hunting output. It is a compromise discovered before any detection rule fired — an intrusion that would have continued undetected without the hunt.

When you escalate, calculate the dwell time compression — the key metric that proves hunting’s value for this specific finding:

// Dwell time compression for a hunt-discovered compromise
// Run after identifying the compromised account and earliest evidence
let compromisedUser = "j.morrison@northgateeng.com";
let huntDiscoveryDate = datetime(2026-03-28);
// Find the earliest anomalous activity for this user
SigninLogs
| where TimeGenerated between (ago(90d) .. huntDiscoveryDate)
| where UserPrincipalName == compromisedUser
| where IPAddress == "203.0.113.47"  // Attacker IP from hunt finding
| summarize EarliestAttackerActivity = min(TimeGenerated)
| extend HuntDiscovery = huntDiscoveryDate
| extend DwellDays = datetime_diff('day', HuntDiscovery, EarliestAttackerActivity)
// DwellDays = how long the attacker was present before hunting found them
// Without hunting, dwell time would have continued until a rule fired
//   or external notification arrived, potentially weeks or months longer
// The difference is the dwell time compression attributable to this hunt

Outcome 2: Hypothesis refuted — no evidence found

The hunt examined the full scope (all data sources, full time window, entire population) and found no evidence supporting the hypothesis. The technique either has not occurred in your environment during the hunt window or has occurred in a way that left no trace in the available telemetry.

This is a positive outcome. Document it:

“Hunt [ID]: Tested hypothesis [statement]. Examined [data sources] across [time window] for [population]. No evidence of [technique] found. Hypothesis refuted for the period [start date] to [end date].”

The documentation serves four purposes. It reduces organizational uncertainty — before the hunt, you did not know whether this technique had been used against you. Now you know it has not (within the bounds of your data). It satisfies compliance requirements for proactive monitoring. It establishes a baseline for future comparison. And it feeds the Convert step — the hunt query, now validated against your data, becomes a candidate for deployment as a scheduled analytics rule.
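The scope figures the documentation needs (time window, population, volume examined) can be captured from the hunt workspace rather than estimated. A sketch, again assuming `SigninLogs` as the data source; the 30-day window is illustrative:

```kql
// Capture the examined scope for the negative-finding record:
// rows inspected, population size, and the exact window covered
SigninLogs
| where TimeGenerated between (ago(30d) .. now())
| summarize
    RowsExamined = count(),
    PopulationSize = dcount(UserPrincipalName),
    WindowStart = min(TimeGenerated),
    WindowEnd = max(TimeGenerated)
```

Recording concrete numbers ("412,000 sign-in events across 1,840 users") makes the refuted conclusion auditable, not just asserted.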

Outcome 3: Hypothesis inconclusive

The hunt produced results that are ambiguous after analysis. One or two enrichment dimensions show anomalies, but correlated evidence is insufficient for a high-confidence finding. Legitimate explanations remain plausible.

This outcome requires judgment. Three options:

Investigate further within the hunt. Run additional enrichment queries. Expand to adjacent data sources. Check longer time windows. If additional analysis resolves the ambiguity (either confirming or refuting), conclude accordingly.

Escalate with caveats. If the ambiguous finding has high enough potential impact (privilege escalation, data exfiltration indicators), escalate to IR as a medium-confidence lead rather than a confirmed finding. The IR team investigates further with access to additional context (user interviews, endpoint forensics) that the hunter may not have.

Document and re-hunt. If the ambiguity cannot be resolved with available data, document the inconclusive finding with the specific gap that prevented resolution: “Unable to determine whether user X’s SharePoint access was legitimate without confirming their project involvement — manager confirmation needed.” Add a refined hypothesis to the backlog for the next hunt cycle, incorporating the additional data source or context needed.
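For the SharePoint example above, the refined backlog hypothesis can name the missing dimension explicitly. One hedged sketch of a re-hunt query: compare the user's site access against a peer group's baseline. `OfficeActivity` is the standard Sentinel table for SharePoint audit events, but the peer list and user names here are assumed inputs, not part of the original hunt:

```kql
// Re-hunt refinement: SharePoint sites accessed by the suspect user
// that none of their peers access (peer list is an assumed input,
// e.g. sourced from the manager confirmation noted in the hunt record)
let suspectUser = "userX@example.com";
let peerUsers = dynamic(["peer1@example.com", "peer2@example.com"]);
let peerSites = OfficeActivity
    | where TimeGenerated > ago(90d)
    | where RecordType == "SharePointFileOperation"
    | where UserId in (peerUsers)
    | distinct Site_Url;
OfficeActivity
| where TimeGenerated > ago(30d)
| where RecordType == "SharePointFileOperation"
| where UserId == suspectUser
| where Site_Url !in (peerSites)  // sites no peer touches; strongest anomaly signal
| summarize AccessCount = count() by Site_Url
```

The added dimension (peer baseline) is exactly what the documented gap said was missing, which is what makes this a refinement rather than a repeat.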

The inconclusive outcome is operationally uncomfortable but methodologically sound. Pretending ambiguous results are either confirmed or refuted is worse than documenting the ambiguity honestly. The hunt record for an inconclusive result should explain what was found, what prevented a definitive conclusion, and what would resolve the ambiguity.

Documenting negative findings

Negative findings deserve the same documentation rigor as positive findings. The temptation is to document positive findings thoroughly (they are interesting) and dismiss negative findings with a single sentence (they are not). This asymmetry undermines the hunting program because it makes the program appear unproductive.

A well-documented negative finding answers:

  • What hypothesis was tested?
  • What data sources were examined?
  • What time window was covered?
  • What population was included?
  • What queries were run (with result counts)?
  • What was the conclusion?
  • What detection rule was produced from the hunt query?

This documentation proves the organization is proactively monitoring for the technique. It proves the technique was not found — which is different from proving it was not looked for. And it provides the raw material for the Convert step, where the hunt query becomes a permanent detection.
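The "queries run with result counts" line of the record can be filled mechanically rather than by hand. A sketch using KQL's `union withsource` operator to count rows per examined table; the table list and window are illustrative and should mirror the hunt's actual scope:

```kql
// Rows examined per data source during the hunt window,
// recorded verbatim in the negative-finding documentation
union withsource=SourceTable SigninLogs, OfficeActivity, AuditLogs
| where TimeGenerated > ago(30d)
| summarize RowsExamined = count() by SourceTable
```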

Try it yourself

Exercise: Write a hunt conclusion

Using the results from TH1.3 and TH1.4 exercises, write a formal conclusion. Use one of the three outcome templates:

If you found a high-confidence finding: Write the escalation package — finding, evidence, timeline, scope assessment, recommended containment.

If you found no evidence: Write the negative finding documentation — hypothesis, data sources, time window, population, conclusion, and note which query is a candidate for detection rule conversion.

If results were ambiguous: Write the inconclusive documentation — what was found, what prevented resolution, and what would resolve it.

This conclusion is the fifth section of your hunt record. It determines what happens next: IR escalation, detection rule creation, or backlog refinement.

⚠ Compliance Myth: "An inconclusive hunt is a failed hunt"

The myth: If the hunt does not produce a clear yes or no, the methodology failed. Hunts should always reach a definitive conclusion.

The reality: Real data is ambiguous. Attackers deliberately create ambiguity — their techniques are designed to look like legitimate activity. An inconclusive result that honestly documents the ambiguity, identifies the specific gap that prevented resolution, and adds a refined hypothesis to the backlog is more valuable than a false-confident conclusion that closes the investigation prematurely. Inconclusive hunts also reveal environmental limitations — missing data sources, inadequate baselines, insufficient enrichment — that, when addressed, improve the next hunt’s ability to reach a conclusion.

Extend this approach

In organizations with formal SOC reporting, hunt conclusions should be included in monthly or quarterly security operations reports. The format: "X hunts conducted. Y hypotheses confirmed (escalated to IR). Z hypotheses refuted (negative findings documented). W hypotheses inconclusive (refined and re-queued)." This reporting demonstrates proactive security activity to leadership and audit. TH15 covers hunt reporting for leadership audiences in detail.
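If hunt conclusions are logged to a workspace table, the quarterly rollup can be generated rather than compiled by hand. This sketch assumes a custom log table named `HuntRecords_CL` with an `Outcome_s` column holding "Confirmed", "Refuted", or "Inconclusive"; substitute whatever your hunt record store actually is:

```kql
// Quarterly hunt-outcome rollup for the SOC report
// HuntRecords_CL and Outcome_s are assumed names for your hunt log
HuntRecords_CL
| where TimeGenerated > ago(90d)
| summarize Hunts = count() by Outcome_s
| order by Hunts desc
```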


References Used in This Subsection

  • Course cross-references: TH0.7 (value of negative findings), TH1.6 (detection rule conversion), TH15 (hunt reporting for leadership)
