TH1.12 Escalation Protocols and IR Handoff

3-4 hours · Module 1 · Free
Operational Objective
A hunt finding that reaches the wrong person, reaches them too slowly, or arrives without sufficient context wastes the dwell time compression hunting is designed to provide. Escalation is the moment hunting delivers its highest value — getting it wrong undermines everything the hunt produced. This subsection defines the escalation protocol: who to notify, what to include, how to maintain hunt continuity during IR, and the critical difference between warm and cold handoffs.
Deliverable: A documented escalation protocol for hunt findings that ensures findings reach the right person with sufficient context for immediate action.
⏱ Estimated completion: 20 minutes

Speed matters at escalation

When a hunt discovers a compromise, every hour between discovery and containment is an hour the attacker continues to operate. The analysis step (TH1.4) already established that high-confidence findings (3+ correlated enrichment dimensions) warrant immediate escalation. This subsection addresses the mechanics.

The escalation package

The IR analyst or SOC lead who receives the escalation needs enough information to take immediate action without re-running the hunt. The package contains four components:

// Generate the evidence summary for the escalation package
// Adapt entity and time window from your hunt finding
let compromisedUser = "j.morrison@northgateeng.com";
let incidentWindow = 7d;
union
    (SigninLogs | where TimeGenerated > ago(incidentWindow)
    | where UserPrincipalName == compromisedUser
    | where IPAddress == "203.0.113.47"
    | project TimeGenerated, Source = "SigninLogs",
        Detail = strcat("Sign-in from ", IPAddress,
            " (", tostring(LocationDetails.countryOrRegion), ")")
    ),
    (AuditLogs | where TimeGenerated > ago(incidentWindow)
    | where tostring(InitiatedBy.user.userPrincipalName) == compromisedUser
    | where OperationName has_any ("registered security",
        "InboxRule", "Consent to application")
    | project TimeGenerated, Source = "AuditLogs",
        Detail = OperationName),
    (EmailEvents | where TimeGenerated > ago(incidentWindow)
    | where RecipientEmailAddress == compromisedUser
    | where ThreatTypes has "Phish"
    | project TimeGenerated, Source = "EmailEvents",
        Detail = strcat("Phishing: ", Subject))
| sort by TimeGenerated asc
// This produces the chronological evidence timeline
// for the escalation package

1. Finding summary (2–3 sentences). What was found, for which entity, with what confidence level. "Hunt TH-2026-005 identified user j.morrison@northgateeng.com with high-confidence indicators of account compromise: sign-in from new IP (Romania) correlated with new MFA method registration and inbox rule creation, all within a 4-hour window. Phishing email delivered to the user 6 hours before the anomalous sign-in."

2. Evidence table. Timestamps, entities, indicators, and data sources — in a format the IR analyst can immediately verify.

3. Recommended containment. The immediate actions you recommend based on the technique. For AiTM: revoke all sessions, force password change, review and remove inbox rules, review and revoke OAuth consents, enable Entra ID Identity Protection user risk = High. The IR analyst may modify these based on additional context, but providing the recommended actions saves decision time.

4. Hunt context. The hypothesis, the query chain that led to the finding, and any additional suspect results that have not been fully analyzed. The IR analyst needs to know whether this finding is isolated or part of a wider pattern the hunt has not finished investigating.
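
The containment recommendation (component 3) is easier to act on when the escalation package already enumerates what must be revoked. A sketch of one such supporting query — listing the OAuth consent grants recorded for the compromised user — reusing the example identity and window from the evidence query above; adapt both to your finding:

// List OAuth consent grants by the compromised user so the IR
// analyst can revoke them without re-deriving the list (sketch)
let compromisedUser = "j.morrison@northgateeng.com";
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Consent to application"
| where tostring(InitiatedBy.user.userPrincipalName) == compromisedUser
| project TimeGenerated, OperationName,
    Application = tostring(TargetResources[0].displayName)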

Warm handoff versus cold handoff

Warm handoff: The hunter walks through the finding with the IR analyst in real time — verbal briefing, screen share, or shoulder-tap. The hunter answers questions, explains the query logic, and provides environmental context ("this IP has been seen by 3 other users this week — possible shared attacker infrastructure"). The IR analyst starts investigation with full context.

Cold handoff: The hunter writes the escalation package and submits it through the ticketing system (Sentinel incident creation, email, Slack). The IR analyst reads it and investigates independently. The hunter is available for questions but not actively guiding the investigation.

Warm handoffs are faster and produce fewer misunderstandings. Use them for high-severity findings. Cold handoffs are acceptable for medium-confidence leads that warrant investigation but are not time-critical.

Maintaining hunt continuity

When you escalate, the hunt does not stop. Two scenarios:

Scenario 1: Escalate and continue. The finding involves one user. The hunt hypothesis covers the full tenant. Escalate the finding for the one user. Continue the hunt for the remaining population. The IR response and the hunt run in parallel.

Scenario 2: Escalate and merge. The finding indicates a wide-scope compromise — multiple users from the same attacker infrastructure, or an attacker technique that implies organizational-level access (Global Admin compromise, conditional access policy weakening). Escalate and merge the remaining hunt scope into the IR investigation. The hunt becomes the IR investigation's scoping phase. The hunt queries become IR queries.

The decision between these scenarios depends on the finding's scope implications. A single compromised user account suggests Scenario 1. Compromised admin credentials suggest Scenario 2.
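
One quick scope check can inform this decision: does the attacker infrastructure from the finding touch other accounts? A sketch, reusing the example IP from the evidence query above:

// Scope check: which accounts has the attacker IP touched?
let attackerIP = "203.0.113.47";
SigninLogs
| where TimeGenerated > ago(30d)
| where IPAddress == attackerIP
| summarize FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated),
    SigninAttempts = count() by UserPrincipalName
| sort by FirstSeen asc
// A single user returned suggests Scenario 1; multiple users
// suggest possible wide-scope compromise (Scenario 2)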

[Figure: ESCALATION — TWO PATHS BASED ON SCOPE. A high-confidence finding takes one of two paths: "Escalate and continue" (single entity compromised; escalate the finding, IR responds, hunt continues for the remaining population) or "Escalate and merge" (wide-scope compromise; escalate the finding, IR absorbs the hunt, hunt queries become IR scoping queries).]

Figure TH1.12 — Two escalation scenarios. Single-entity findings allow the hunt to continue in parallel. Wide-scope findings merge the hunt into the IR investigation.

Try it yourself

Exercise: Draft an escalation package

Using the finding from TH1.3–TH1.5 exercises (or a hypothetical finding if your hunt produced no true positives), draft the complete escalation package: finding summary, evidence table (run the evidence timeline query adapted for your finding), recommended containment actions, and hunt context.

Show the package to a colleague or your SOC lead. Ask: "If you received this at 2 AM, do you have enough information to start investigating?" If the answer is no, identify what is missing and add it.

The hunt-to-IR handoff

When a hunt discovers active compromise, the transition from hunting to incident response must be immediate and structured. The hunter documents: what was found, which entities are affected, the estimated timeline of the activity, and the current confidence level. This document becomes the IR team's starting point — they should not need to re-run the hunter's queries to understand the scope. At NE, Rachel's protocol requires the hunter to produce a one-page handoff document before the IR team takes over: finding summary, affected entities (users, devices, IPs), timeline (first evidence to most recent), data sources queried, and recommended immediate containment actions.

The queries developed during this exercise become reusable templates in your personal hunting library. Parameterise the hardcoded values (user names, IP addresses, time windows) and add a header comment explaining the hypothesis each query tests. A mature hunting program maintains 50-100 parameterised query templates that any team member can execute — reducing the per-hunt preparation time from hours to minutes and ensuring consistent methodology across analysts.
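
A minimal template sketch following that convention — the header comment states the hypothesis, and the hardcoded values from the evidence query are lifted into parameters (the template ID and parameter names are illustrative):

// Template: HUNT-ESC-01
// Hypothesis: the named user shows sign-in activity from known
//   attacker infrastructure within the lookback window.
// Parameters: suspectUser, suspectIP, lookback
let suspectUser = "<user principal name>";
let suspectIP = "<attacker IP>";
let lookback = 7d;
SigninLogs
| where TimeGenerated > ago(lookback)
| where UserPrincipalName == suspectUser and IPAddress == suspectIP
| project TimeGenerated, IPAddress, ResultType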

The handoff document should use a standardised template that the IR team is already familiar with — ideally the same template used for alert escalation from the SOC. Consistency in format reduces the cognitive overhead for the receiving team: they know where to find the affected entities, the timeline, and the recommended actions without searching through a free-form narrative. At NE, the hunt handoff template mirrors the incident handoff template with one addition: the hunt hypothesis and the evidence that confirmed it.

⚠ Compliance Myth: "Hunt findings should be documented before escalation to avoid false alarms"

The myth: Take time to fully document the finding before escalating. False escalations damage credibility.

The reality: Documentation happens in parallel with — not before — escalation. A high-confidence finding (3+ correlated dimensions) has sufficient evidence for immediate escalation. Waiting to write a polished report while the attacker continues operating wastes the dwell time compression that justified the hunt in the first place. Escalate with the evidence you have. Document the full hunt record after containment is initiated. The escalation package (finding summary, evidence, containment recommendation) takes 10 minutes to assemble. The full hunt record takes 20 minutes after the hunt concludes. Do not confuse the two.

Extend this protocol

If your organization has a formal incident management process with defined severity levels and escalation matrices, integrate hunt escalations into that process. A high-confidence hunt finding should create a Sentinel incident (manually or through a dedicated automation rule for hunt escalations) with the appropriate severity and assignment. This ensures the finding enters the same workflow as detection-triggered incidents — with the same SLAs, the same triage process, and the same documentation requirements. TH14 covers the integration of hunting with SOC workflows in detail.


References Used in This Subsection

  • Course cross-references: TH1.4 (confidence model for escalation threshold), TH1.5 (conclusion — confirmed outcome), TH0.6 (hunting → IR handoff point 6)

Detection depth: NE-specific implementation

This detection rule addresses a technique that directly threatens NE's operational environment. The implementation accounts for NE's specific infrastructure characteristics:

Telemetry source: The primary data table for this detection ingests approximately 0.5-3.2 GB/day depending on the activity volume. At NE's scale (810 users, 865 devices, 42 servers), the event volume generates a stable baseline that statistical detection methods (percentile analysis from DE9.4) can reliably characterize. Deviations from this baseline represent either environmental changes (new applications, infrastructure modifications) or attacker activity.


Threshold calibration: The threshold was selected using the percentile method: P99 of 30-day historical data establishes the upper bound of normal activity. The production threshold is set at 1.5x P99 to provide margin above normal fluctuation while maintaining detection sensitivity for attack patterns that typically generate 5-50x normal volume.
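
The calibration can be reproduced with a short query — here using hourly SigninLogs volume as an illustrative source; substitute the detection's actual table and grouping:

// P99 baseline over 30 days, production threshold at 1.5x (sketch)
SigninLogs
| where TimeGenerated > ago(30d)
| summarize HourlyCount = count() by bin(TimeGenerated, 1h)
| summarize P99 = percentile(HourlyCount, 99)
| extend ProductionThreshold = 1.5 * P99
// Alert when a new hour's count exceeds ProductionThreshold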

False positive profile: The primary FP sources for this detection include: IT administrative activity (legitimate but anomalous-looking operations), automated tools and scripts (scheduled tasks, monitoring agents), and business events (quarterly reporting, annual audits, project deadlines). Each FP source is addressed through the watchlist architecture (DE9.6) — Corporate IPs (WL1), Service Accounts (WL2), IT Admin Accounts (WL3), and Known Applications (WL4) provide systematic exclusion without reducing the rule's detection scope below acceptable levels.
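
The exclusion pattern looks like this in KQL — the watchlist names follow the WL1/WL2 convention above, and the detection logic itself is left as a placeholder:

// Watchlist exclusions applied before detection logic (sketch)
let corpIPs = _GetWatchlist('WL1') | project SearchKey;
let svcAccounts = _GetWatchlist('WL2') | project SearchKey;
SigninLogs   // stand-in for the detection's actual source table
| where not(IPAddress in (corpIPs))
| where not(UserPrincipalName in (svcAccounts))
// ...detection logic continues here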

Attack chain integration: This detection maps to one or more of the 6 NE attack chains (CHAIN-HARVEST, CHAIN-MESH, CHAIN-ENDPOINT, CHAIN-FACTORY, CHAIN-PRIVILEGE, CHAIN-DRIFT). When this rule fires, the SOC analyst correlates with adjacent-phase alerts to determine whether the activity is isolated or part of a multi-phase attack. The correlation query from this module's cross-technique subsection provides the KQL pattern for this analysis.

Response procedure: On alert, the analyst: (1) checks the entity against the watchlists — is this a known benign source? (2) checks for correlated alerts from adjacent kill chain phases within 60 minutes, (3) classifies as TP/FP/BTP using the DE9.5 decision tree, and (4) escalates to Rachel if the alert correlates with other phases (potential active attack chain).
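
Step 2 can be sketched as a SecurityAlert lookup around the triggering alert's time — the timestamp and entity values are illustrative examples, not derived from a real incident:

// Correlated alerts within 60 minutes of the triggering alert (sketch)
let triggerTime = datetime(2026-03-02 09:30:00); // time the rule fired (example)
let entity = "j.morrison@northgateeng.com";
SecurityAlert
| where TimeGenerated between ((triggerTime - 60m) .. (triggerTime + 60m))
| where Entities has entity
| project TimeGenerated, AlertName, AlertSeverity, ProviderName
| sort by TimeGenerated asc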

Decision point

Your privilege escalation hunt finds that a service account was added to the Global Administrator role 4 months ago by an IT administrator. The IT admin says it was needed for a migration project that has since completed. What do you recommend?

Recommended response: Remove the Global Administrator role immediately and document the finding. A service account with permanent Global Admin — even if legitimately assigned — is a standing privilege escalation risk. The migration project completed 4 months ago, but the elevated permission persists. The hunt finding: 'Stale privilege assignment — service account [name] retains Global Administrator from completed migration project. Recommend: remove role, implement PIM just-in-time activation for any future temporary elevation, and add a calendar reminder for privilege review at project completion.' This finding improves NE's security posture — it is exactly the type of security debt that hunts are designed to identify.

A hunt query returns 200 results. You have 4 hours remaining in the hunt window. You can investigate 20 results thoroughly or review all 200 superficially. Which approach produces better hunt outcomes?

  • Review all 200 — you might miss a critical finding in the 180 you skip.
  • ✔ Investigate 20 thoroughly. A superficial review of 200 results produces 200 'looked at it, seemed okay' assessments that provide no investigative value and no documentation for future reference. A thorough investigation of 20 results produces: confirmed findings (true positives requiring remediation), confirmed benign patterns (documented baselines for future comparison), and inconclusive results (flagged for monitoring). Prioritise the 20 by: highest anomaly score, highest-value assets involved, and highest-risk users involved. Document why the remaining 180 were not investigated and recommend a follow-up hunt with refined query criteria to reduce the result set.
  • Investigate 20 — but only if they are from the most recent 24 hours.
  • Neither — refine the query first to reduce the result set below 50.

You understand the detection gap and the hunt cycle.

TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.

  • 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
  • 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
  • Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
  • Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
  • TH16 — Scaling hunts across a team — the operating model for a production hunt program