10.8 Hunt Management and Collaboration

14-18 hours · Module 10


Introduction

Ad-hoc hunting — running random queries when you have spare time — produces sporadic results with no accountability. Structured hunt management ensures: every hunt is documented, findings are tracked, detection improvements are made, and the organisation builds institutional knowledge about its threat landscape over time.


The hunt management workflow

Step 1: Hunt intake. A hunt starts from one of five triggers: threat intelligence advisory, MITRE ATT&CK coverage gap, incident follow-up, UEBA anomaly, or scheduled cadence rotation. Record the trigger, formulate the hypothesis, and assign the hunt to an analyst.

Step 2: Hunt execution. The assigned analyst follows the hypothesis-driven methodology (subsection 10.4): develop queries, execute, analyse results, create bookmarks for findings. Track time spent.

Step 3: Hunt review. If the hunt found suspicious activity, a second analyst reviews the findings before promotion to incident. Peer review catches analytical errors and provides a fresh perspective.

Step 4: Hunt closure. Document the outcome: Threat Confirmed (promote to incident, create analytics rule), Suspicious (schedule follow-up hunt with refined hypothesis), Benign (document the benign pattern for future reference), or No Findings (document negative result). Update the hunt log.

Step 5: Detection feedback. If the hunt confirmed a threat or identified a detection gap, feed the finding into the detection engineering lifecycle (Module 9.11). The hunting programme and the detection programme are symbiotic: hunting finds what detection misses, and detection automates what hunting discovers.
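The five steps above can be sketched as a small state machine, which makes it easy to reject invalid shortcuts (e.g., closing a hunt with findings without peer review). The stage names and transitions below are illustrative assumptions, not a prescribed Sentinel schema:

```python
# Illustrative sketch of the five-step hunt workflow as a state machine.
# Stage names and transitions are assumptions, not a Sentinel schema.

ALLOWED_TRANSITIONS = {
    "Intake": {"Execution"},
    "Execution": {"Review", "Closure"},   # a hunt with no findings may skip peer review
    "Review": {"Closure"},
    "Closure": {"DetectionFeedback", "Done"},
    "DetectionFeedback": {"Done"},
}

def advance(current: str, target: str) -> str:
    """Move a hunt to the next workflow stage, rejecting invalid jumps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current} to {target}")
    return target

# A hunt that found suspicious activity goes through peer review:
stage = advance("Intake", "Execution")
stage = advance(stage, "Review")
stage = advance(stage, "Closure")
```

Encoding the transitions keeps the process honest: an analyst cannot mark a hunt closed without it having passed through execution, and findings cannot be promoted without the review step.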


Hunt tracking system

Maintain a structured hunt log — in a Sentinel workbook (querying hunt metadata from a custom table), a SharePoint list, or a simple spreadsheet.

Hunt record fields: Hunt ID, hypothesis, trigger (TI/MITRE gap/incident/UEBA/cadence), status (In Progress/Completed/Cancelled), assigned analyst, start date, completion date, time spent (hours), data sources queried, time range searched, finding count, outcome (Threat Confirmed/Suspicious/Benign/No Findings), bookmarks created, incidents created, analytics rules created, and notes.
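One way to give these fields a fixed shape, so every tool (workbook, spreadsheet, script) agrees on the schema, is a small record type. The field names below are illustrative and can be renamed to match your own log:

```python
# Illustrative hunt record matching the fields listed above.
# Field names are assumptions; adapt them to your own hunt log.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HuntRecord:
    hunt_id: str
    hypothesis: str
    trigger: str                  # TI / MITRE gap / incident / UEBA / cadence
    status: str = "In Progress"   # In Progress / Completed / Cancelled
    assigned_analyst: str = ""
    start_date: str = ""
    completion_date: Optional[str] = None
    time_spent_hours: float = 0.0
    data_sources: list = field(default_factory=list)
    time_range: str = ""
    finding_count: int = 0
    outcome: Optional[str] = None  # Threat Confirmed / Suspicious / Benign / No Findings
    bookmarks_created: int = 0
    incidents_created: int = 0
    analytics_rules_created: int = 0
    notes: str = ""
```

A fixed schema pays off later: the monthly metrics and the status board can both be computed directly from a list of these records.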

Monthly hunt metrics:

Hunts completed per month — track velocity. Target: 4-8 for a solo operator, 12-20 for a team of 3.

Threat confirmation rate — percentage of hunts that confirmed a real threat. Target: 10-25% (higher rates suggest you are only hunting for easy targets; lower rates suggest your hypotheses need refinement).

Detection rules created from hunts — the tangible output. Target: 1-2 new rules per month from hunting findings.

Average time per hunt — track efficiency. Target: 2-4 hours per hypothesis hunt.
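The four metrics above can be computed mechanically from the hunt log. A minimal sketch, assuming each record is a dict with the fields described earlier:

```python
# Sketch: compute the four monthly hunting metrics from a list of hunt
# records (dicts with the fields described in this section).

def monthly_metrics(hunts: list) -> dict:
    completed = [h for h in hunts if h["status"] == "Completed"]
    confirmed = [h for h in completed if h["outcome"] == "Threat Confirmed"]
    total_hours = sum(h.get("time_spent_hours", 0) for h in completed)
    return {
        "hunts_completed": len(completed),
        "threat_confirmation_rate": (
            len(confirmed) / len(completed) if completed else 0.0
        ),
        "rules_created": sum(h.get("analytics_rules_created", 0) for h in completed),
        "avg_hours_per_hunt": total_hours / len(completed) if completed else 0.0,
    }
```

Running this at month-end against the log gives the numbers for the targets above without any manual tallying, and the same output feeds the management report directly.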


Hunt prioritisation framework

When the hypothesis backlog has 20 items and you have time for 2 hunts per month, prioritisation determines which threats you address first.

Priority 1 — Active threat intelligence. A credible threat advisory reports an active campaign targeting your industry with specific TTPs. Hunt immediately. Example: “Storm-1167 is actively targeting UK financial services with AiTM phishing. Hunt for token replay indicators in your SigninLogs.”

Priority 2 — Incident follow-up. A recent true positive incident suggests the attacker may have achieved objectives beyond what the investigation confirmed. Hunt to extend the investigation scope. Example: “The BEC incident in Account X was contained. Hunt for the same attacker IP accessing other accounts.”

Priority 3 — MITRE ATT&CK coverage gap. The coverage analysis shows high-impact techniques with no analytics rules and no previous hunts. Hunt to determine whether the technique is present and build a rule. Example: “T1136.003 (Create Cloud Account) has no coverage — hunt for accounts created outside HR provisioning.”

Priority 4 — UEBA anomaly investigation. UEBA has flagged entities with elevated investigation priority scores that have not been investigated through the incident workflow. Hunt to determine whether the behavioural anomalies represent threats. Example: “User S. Chen has an investigation priority of 8/10 with 4 anomaly types — investigate.”

Priority 5 — Scheduled cadence rotation. No specific trigger — the monthly cadence calls for a hunt. Pick the highest-priority item from the backlog that has not been hunted recently.
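The five tiers translate directly into a sort order for the backlog. The trigger labels and the tie-breaker (staleness: how long since a hypothesis was last hunted) in this sketch are assumptions:

```python
# Sketch: rank the hypothesis backlog by the five-tier priority framework.
# Trigger labels and the staleness tie-breaker are assumptions.

PRIORITY = {"TI": 1, "incident": 2, "MITRE gap": 3, "UEBA": 4, "cadence": 5}

def rank_backlog(backlog: list) -> list:
    """Lower priority number first; within a tier, the stalest hypothesis first."""
    return sorted(
        backlog,
        key=lambda h: (PRIORITY[h["trigger"]], -h.get("days_since_last_hunted", 0)),
    )
```

With 20 items and capacity for 2 hunts a month, the top two entries of the ranked list are this month's hunts; everything else waits, explicitly and visibly.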


The hunt status board

Visualise active and planned hunts for team awareness.

Board columns: Backlog (hypotheses awaiting execution) → In Progress (currently being hunted) → Review (findings under peer review) → Completed (documented and closed).

For a solo operator, the board is simple: 1 hunt in progress, 5-10 in the backlog. For a team, the board shows each analyst’s active hunt and prevents duplicate effort (two analysts hunting the same hypothesis independently).

Implementation: Use a Sentinel workbook tile that queries hunt metadata from a custom table (HuntLog_CL), a shared Teams channel with a Planner board, or a simple table in a shared document. The tool matters less than the discipline of maintaining it.
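Whichever tool you choose, the board is most reliable when it is derived from the hunt log rather than maintained separately, so the two can never disagree. A minimal sketch, using the status values from this section:

```python
# Sketch: derive the four board columns from the hunt log, so the board
# is always a view of the log rather than a second source of truth.

BOARD_COLUMNS = ["Backlog", "In Progress", "Review", "Completed"]

def board_view(hunts: list) -> dict:
    columns = {c: [] for c in BOARD_COLUMNS}
    for h in hunts:
        if h["status"] in columns:
            columns[h["status"]].append(h["hunt_id"])
    return columns
```

For a team, a quick scan of the "In Progress" column before starting a hunt is the duplicate-effort check described above.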


Reporting hunting activity to management

Management needs to understand that hunting is productive, not optional exploration. Report monthly.

What to report:

Hunts completed this month: [count]. Hypothesis types: [breakdown by trigger — TI, MITRE gap, incident, UEBA, cadence].

Findings: [count] suspicious findings investigated. [count] threats confirmed. [count] analytics rules created from findings.

MITRE ATT&CK coverage improvement: “We now have detection or hunting coverage for X% of priority techniques, up from Y% last month.”

Risk reduction: “This month’s hunts confirmed our environment is clean for [list of specific threat techniques]. One hunt identified [specific finding] — containment was executed within [time].”

The narrative: Hunting finds threats that automated detection misses. Without hunting, these threats would remain undetected until the attacker achieves their objective. The hunting programme is an investment in reducing the organisation’s exposure to undetected compromise.
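The monthly report can be rendered straight from the computed metrics, which keeps the numbers consistent with the hunt log. This sketch assumes a metrics dict with a per-trigger breakdown; the wording mirrors the bullet points above:

```python
# Sketch: render the monthly management report from computed metrics.
# The metrics dict shape (including "by_trigger") is an assumption.

def render_report(month: str, m: dict) -> str:
    triggers = m.get("by_trigger", {})
    breakdown = ", ".join(f"{t}: {n}" for t, n in triggers.items())
    return (
        f"Hunting report for {month}\n"
        f"Hunts completed: {m['hunts_completed']} ({breakdown})\n"
        f"Threats confirmed: {m['threats_confirmed']}\n"
        f"Analytics rules created: {m['rules_created']}\n"
    )
```

Generating the report rather than writing it by hand also makes the month-on-month coverage comparison ("X%, up from Y%") trivially reproducible.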


Hunt documentation standards

Every hunt record should be self-contained — a future analyst reading the record 6 months later should understand exactly what was hunted, how, and what was found without asking the original hunter.

The hunt narrative. Beyond the structured fields (hypothesis, queries, outcome), include a 3-5 sentence narrative: “This hunt was triggered by Microsoft TI report MSTIC-2026-003 describing Storm-1167 AiTM campaigns targeting UK financial services. We searched SigninLogs for sign-ins from the reported IPs and for token replay indicators (different IP within session). We found 3 users with sign-ins from the reported IP range — all from the same VPN provider used by our Paris office. Classified as benign. No further action required.”

This narrative transforms the hunt record from a data sheet into an intelligence document that informs future hunts.

Query documentation. For each query in the hunt: document the hypothesis it tests, the table(s) it queries, the time range, any thresholds or filters, and the expected result interpretation (“If this query returns results, it indicates [X]. If zero results, it indicates [Y].”).
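A per-query documentation entry can follow a fixed template so no field is forgotten. The keys and the example values below are illustrative, not a required format:

```python
# Sketch: a per-query documentation entry matching the fields listed above.
# Keys and example values are illustrative.

QUERY_DOC_TEMPLATE = {
    "hypothesis_tested": "Token replay following AiTM phishing",
    "tables": ["SigninLogs"],
    "time_range": "last 30 days",
    "thresholds_and_filters": "sessions with more than one source IP",
    "if_results": "Possible token replay; pivot to the affected accounts",
    "if_zero_results": "No replay indicators in this window; record the negative result",
}
```

The two interpretation fields are the ones most often skipped and the most valuable six months later, because they record what the hunter expected the query to mean.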

Evidence documentation. For each bookmark created: document why it was bookmarked, what the hunter’s assessment is (suspicious/benign/confirmed), and the recommended next action. A bookmark without context is a data point without meaning.


Hunt knowledge base

Over time, the hunt log becomes a knowledge base. Build on it.

Hunting playbooks for recurring hypotheses. If you hunt for the same category of threat regularly (e.g., quarterly AiTM phishing hunt), create a standardised hunting playbook: pre-written queries, expected results, decision criteria, and documentation template. This enables any analyst to execute the hunt — not just the original author.

Benign pattern catalogue. Maintain a list of patterns that look suspicious but have been confirmed benign: “VPN provider X in Nigeria used by our Lagos office — not an indicator of compromise,” “Service principal SP-0042 signs in from 20 different IPs — this is expected due to Azure Functions scaling,” “User J. Morrison accesses SharePoint at 2am weekly — confirmed: automated backup process.” This catalogue prevents re-investigation of known benign patterns across multiple hunts.
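The catalogue is most useful when it can be checked programmatically before an analyst spends time re-investigating. This sketch matches on simple (entity, indicator) pairs, which is an assumption; a real catalogue might match on IP ranges, ASNs, or schedules:

```python
# Sketch: check a new finding against the benign pattern catalogue before
# investigating. Entries and the matching scheme are illustrative.
from typing import Optional

BENIGN_CATALOGUE = [
    {"entity": "SP-0042", "indicator": "many_source_ips",
     "reason": "Expected: Azure Functions scaling"},
    {"entity": "J. Morrison", "indicator": "offhours_sharepoint",
     "reason": "Confirmed: automated weekly backup process"},
]

def known_benign(entity: str, indicator: str) -> Optional[str]:
    """Return the documented reason if this pattern is already confirmed benign."""
    for p in BENIGN_CATALOGUE:
        if p["entity"] == entity and p["indicator"] == indicator:
            return p["reason"]
    return None
```

Each confirmed-benign hunt outcome adds an entry; each new hunt starts by filtering its findings through the catalogue.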

Threat technique reference library. For each MITRE ATT&CK technique you have hunted: the hypothesis used, the queries developed, the typical data source, and the detection rule (if one was created). Over multiple hunts, this library becomes a comprehensive, environment-specific threat hunting guide.


Hunt retrospectives

After a hunt that confirms a threat, conduct a brief retrospective; routine hunts with no findings do not require one.

Questions to answer: How was the hypothesis generated? (Which intelligence source?) How long did the hunt take from hypothesis to confirmation? Could the threat have been detected automatically with an analytics rule? (If yes, why was the rule not already deployed?) What would have happened if the hunt had not been conducted? (How long would the attacker have remained undetected?) What improvements should be made: to the detection library, to the hunting process, or to the data coverage?

Retrospective output: One or more of: a new analytics rule, a refined hunting query, a recommendation for additional data connectors (Module 8), an update to the hypothesis backlog, or a threat intelligence contribution to the ISAC.


Collaboration patterns

Pair hunting. Two analysts work the same hypothesis simultaneously — one writes and runs queries, the other reviews results and provides alternative interpretations. This is the hunting equivalent of pair programming — it catches blind spots and generates richer analysis.

Hunt handover. If a hunt spans multiple shifts or days, use the hunt record to enable seamless handover. The hunt record should contain: current status, queries already run (with results), bookmarks created, and the next step the incoming analyst should take.

Cross-team intelligence sharing. When a hunt finds a novel technique or an unexpected attacker behaviour, share the finding with the broader security community: internal threat intelligence distribution list, industry ISAC, or the Microsoft Sentinel community (GitHub repository). Sharing benefits the community and establishes your organisation as a contributor — which in turn attracts reciprocal intelligence.

Try it yourself

Create a hunt record for one of your hypothesis hunts from subsection 10.4. Fill in all fields: hypothesis, trigger, data sources, time range, queries executed, findings, outcome, and detection improvements. If you have completed multiple hunts across this module, compile them into a hunt log. This log becomes your institutional record of hunting activity — and the evidence that your organisation proactively hunts for threats.

What you should observe

A completed hunt record provides a complete audit trail: what was hunted, when, by whom, and what was found. Over time, the hunt log reveals patterns: which hypotheses are most productive, which data sources support the most hunts, and which MITRE techniques have been covered by hunting vs detection vs neither.


Knowledge check

Check your understanding

1. What is the primary output of a successful hunt (one that finds a real threat)?

a) A bookmark only
b) A report to management
c) The hunt itself is the output
d) An incident plus a new analytics rule

Answer: (d). Two outputs: (1) an incident for immediate investigation and response, and (2) a new analytics rule that detects the same pattern automatically in the future. The incident handles the current threat; the analytics rule prevents future occurrences from requiring manual hunting, closing the detection gap that necessitated the hunt.