TH1.14 Hunt Cadence and Scheduling Models

3-4 hours · Module 1 · Free
Operational Objective
Hunting without a cadence is ad hoc. Ad hoc hunting produces inconsistent results, unmeasurable output, and eventual abandonment when alert queue pressure consumes the unprotected hours. This subsection defines three cadence models — weekly, biweekly, and monthly — and provides the scheduling mechanics that protect hunting time from the operational demands that will try to consume it.
Deliverable: A cadence model matched to your team size and SOC workload, with calendar integration and escalation protocols for when hunting time is threatened.
⏱ Estimated completion: 20 minutes

If it is not on the calendar, it does not happen

Hunting competes with alert triage for the same analyst hours. Alert triage always wins the competition because alerts are immediate and visible — an unresolved alert feels like a failure. An unexecuted hunt is invisible — nobody notices it did not happen.

The only defense is a calendar block that is treated with the same seriousness as an on-call shift. The analyst doing the hunting is not available for alert triage during those hours. If an alert fires, someone else handles it. If the team is too short-staffed to spare anyone, the hunting session is rescheduled to a specific date within the same week — not "we will do it when things calm down." Things do not calm down.

Three cadence models

// How much alert volume does your team handle?
// This informs whether you can protect hunting hours
SecurityAlert
| where TimeGenerated > ago(30d)
| summarize
    DailyAlerts = count() / 30.0,
    HighSeverity = countif(AlertSeverity == "High") / 30.0,
    MediumSeverity = countif(AlertSeverity == "Medium") / 30.0
// If DailyAlerts > 50 with a 3-person team, protecting 4 hours
// weekly is difficult — consider biweekly or monthly cadence
// If DailyAlerts < 20, weekly cadence is easily achievable

Weekly cadence (4 hours/week). Best for teams of 5+ analysts where one analyst can rotate to hunting duty each week without understaffing the alert queue. Produces the fastest coverage improvement — one campaign can be completed in 1–2 weeks, yielding 24–48 campaigns per year.

Biweekly cadence (4–6 hours every two weeks). Best for teams of 3–5 analysts. The analyst does hunting every other week, with the alternating week fully dedicated to alert triage. Produces 12–24 campaigns per year — sufficient for meaningful coverage improvement.

Monthly cadence (6–8 hours/month). Best for teams of 2–3 analysts or solo practitioners. One dedicated hunting day per month, blocked in advance, protected from the queue. Produces 12 campaigns per year — the minimum viable program from TH0.7.

All three models work. The choice depends on team size, alert volume, and how much time can be reliably protected. A monthly cadence executed consistently is better than a weekly cadence that is interrupted every other week by alert surges.

Rotational versus dedicated

Rotational: Different analysts hunt on different weeks. The hunting backlog and documentation provide continuity — each analyst picks up where the last left off, guided by the backlog priority and prior hunt records.

Advantages: Develops hunting skills across the team. No single point of failure. Distributes the environmental knowledge that hunting builds.

Disadvantages: Each analyst hunts less frequently, so skill development is slower. Context switching between alert triage and hunting introduces a startup cost at each rotation.

Dedicated: One analyst (or a small team) is permanently assigned to hunting. They do not rotate through alert triage.

Advantages: Deepest skill development. No context switching. Fastest program maturity. The dedicated hunter builds the strongest environmental knowledge because they examine the data every day.

Disadvantages: Requires staffing that most teams cannot afford. Creates a single point of failure. The rest of the team does not develop hunting skills.

Recommendation for most organizations: Start rotational. Build hunting into every senior analyst's skill set. If the program produces enough value to justify dedicated headcount (TH0.14 metrics provide the evidence), hire or assign a dedicated hunter after 12 months of demonstrated ROI.

THREE CADENCE MODELS — MATCHED TO TEAM SIZE

  • Weekly · 4 hrs/wk · Team: 5+ analysts · Output: 24–48 campaigns/year · Fastest coverage improvement; best if alert volume allows
  • Biweekly · 4–6 hrs/2 wks · Team: 3–5 analysts · Output: 12–24 campaigns/year · Good balance; recommended starting point
  • Monthly · 6–8 hrs/month · Team: 2–3 analysts or solo · Output: 12 campaigns/year · Minimum viable program; viable for any team size

A monthly cadence executed consistently outperforms a weekly cadence interrupted by alert surges. Consistency matters more than frequency.

Figure TH1.14 — Three cadence models. All produce meaningful output. Choose based on team size and alert volume, not ambition.

Try it yourself

Exercise: Select and implement your cadence

Run the alert volume query above. Based on your team size and daily alert volume, select a cadence model.

Block the first hunting session on your calendar — a specific date, a specific 4-hour block, with a specific analyst assigned. Share the calendar block with your SOC lead. If someone tries to reassign the analyst during the hunting block, the calendar entry is the evidence that hunting was scheduled and should not be interrupted without explicit rescheduling.

If the first session gets interrupted, reschedule it within the same week. If it gets interrupted three times, the issue is not hunting — it is alert workload. Address the workload (better tuning, automation, or headcount) before re-establishing the hunting cadence.

Cadence aligned to threat intelligence

Hunt scheduling should be driven by threat intelligence, not arbitrary calendars. When threat intelligence reports a new technique targeting your sector, schedule a hunt for that technique within the next cycle — do not wait for the next quarterly hunt calendar slot. NE runs a hybrid cadence: weekly micro-hunts (2–4 hours, single hypothesis, single data source), monthly focused hunts (a full day, multi-table correlation, aligned to an ATT&CK technique), and ad-hoc intelligence-driven hunts (triggered by threat reports, vendor advisories, or peer-organization breach notifications). The weekly cadence maintains hunting muscle memory, the monthly cadence provides depth, and the ad-hoc cadence provides agility.

The queries developed during this exercise become reusable templates in your personal hunting library. Parameterise the hardcoded values (user names, IP addresses, time windows) and add a header comment explaining the hypothesis each query tests. A mature hunting program maintains 50-100 parameterised query templates that any team member can execute — reducing the per-hunt preparation time from hours to minutes and ensuring consistent methodology across analysts.
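As an illustration of the template pattern, the sketch below parameterises a hypothetical sign-in hunt with `let` bindings at the top so any analyst can rerun it. The table and column names (`SigninLogs`, `UserPrincipalName`, `IPAddress`) are standard Microsoft Sentinel schema; the parameter values are placeholders, not values from this course.

```kql
// Hunt template — hypothesis: anomalous sign-in volume for a single user
// Parameters: edit only the let bindings; the query body stays fixed
let lookback   = 14d;            // time window under investigation
let target_upn = "REPLACE_ME";   // user under investigation (placeholder)
SigninLogs
| where TimeGenerated > ago(lookback)
| where UserPrincipalName =~ target_upn
| summarize SignIns = count(), DistinctIPs = dcount(IPAddress)
    by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```

A header comment stating the hypothesis, plus parameters isolated at the top, is what turns a one-off query into a library template.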

Document the cadence in the SOC charter and the hunting program plan — both stakeholders and analysts need a single reference for scheduling expectations. The cadence must be communicated to leadership in terms they value: weekly micro-hunts prevent detection gaps between monthly hunt cycles, monthly hunts provide the depth to discover sophisticated threats that weekly time constraints cannot accommodate, and ad-hoc hunts demonstrate responsiveness to emerging threats — a capability that audit and compliance teams increasingly require evidence of.

⚠ Compliance Myth: "Daily hunting is the gold standard — anything less is insufficient"

The myth: Organizations should hunt every day. Daily hunting is the target that demonstrates mature threat operations.

The reality: Daily hunting is only viable with dedicated hunting teams that do not share alert triage responsibility. For the vast majority of organizations, daily hunting is neither achievable nor necessary. A monthly cadence that produces 12 documented campaigns, 12+ detection rules, and measurable coverage improvement per year is a high-performing hunting program. The metric that matters is not how often you hunt — it is whether hunts are completed, documented, and producing detection rules. A team that hunts monthly and completes every campaign outperforms a team that attempts daily hunting but cancels 80% of sessions due to alert pressure.

Extend this model

TH14 (the Phase 3 operations module) covers cadence management in organizational context — integrating hunting with sprint cycles, aligning hunt campaigns with threat intelligence briefing schedules, and building hunting into SOC team performance metrics. This subsection provides the practical starting point. The operations module provides the scaling framework.


References Used in This Subsection

  • Course cross-references: TH0.7 (minimum viable program metrics), TH0.8 (prerequisite 5: protected time), TH0.14 (program metrics), TH14 (Phase 3 operations)

Detection depth: NE-specific implementation

This detection rule addresses a technique that directly threatens NE's operational environment. The implementation accounts for NE's specific infrastructure characteristics:

Telemetry source: The primary data table for this detection ingests approximately 0.5-3.2 GB/day depending on the activity volume. At NE's scale (810 users, 865 devices, 42 servers), the event volume generates a stable baseline that statistical detection methods (percentile analysis from DE9.4) can reliably characterize. Deviations from this baseline represent either environmental changes (new applications, infrastructure modifications) or attacker activity.


Threshold calibration: The threshold was selected using the percentile method: P99 of 30-day historical data establishes the upper bound of normal activity. The production threshold is set at 1.5x P99 to provide margin above normal fluctuation while maintaining detection sensitivity for attack patterns that typically generate 5-50x normal volume.
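The percentile method above can be sketched in KQL. This is a hedged illustration: `SecurityEvent` stands in for whichever table the detection actually reads, and hourly binning is an assumption — substitute the grain your rule aggregates on.

```kql
// Establish the normal-activity baseline from 30 days of history,
// then derive the candidate production threshold at 1.5x P99
SecurityEvent
| where TimeGenerated > ago(30d)
| summarize HourlyVolume = count() by bin(TimeGenerated, 1h)
| summarize P99 = percentile(HourlyVolume, 99)
| extend ProposedThreshold = P99 * 1.5
```

Re-run the calibration periodically: a baseline shifts when the environment changes, and a threshold set against stale history drifts toward either noise or blindness.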

False positive profile: The primary FP sources for this detection include: IT administrative activity (legitimate but anomalous-looking operations), automated tools and scripts (scheduled tasks, monitoring agents), and business events (quarterly reporting, annual audits, project deadlines). Each FP source is addressed through the watchlist architecture (DE9.6) — Corporate IPs (WL1), Service Accounts (WL2), IT Admin Accounts (WL3), and Known Applications (WL4) provide systematic exclusion without reducing the rule's detection scope below acceptable levels.

Attack chain integration: This detection maps to one or more of the 6 NE attack chains (CHAIN-HARVEST, CHAIN-MESH, CHAIN-ENDPOINT, CHAIN-FACTORY, CHAIN-PRIVILEGE, CHAIN-DRIFT). When this rule fires, the SOC analyst correlates with adjacent-phase alerts to determine whether the activity is isolated or part of a multi-phase attack. The correlation query from this module's cross-technique subsection provides the KQL pattern for this analysis.
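As a minimal sketch of the adjacent-phase idea — not the module's actual cross-technique query — the pattern groups recent alerts by entity and flags any entity whose alerts span more than one tactic. `Tactics` and `CompromisedEntity` are standard `SecurityAlert` columns; the 1-hour lookback approximates the 60-minute correlation window.

```kql
// Flag entities with alerts spanning more than one kill chain phase
// inside a short window — a signal of a multi-phase attack chain
SecurityAlert
| where TimeGenerated > ago(1h)
| mv-expand Tactic = todynamic(Tactics)   // Tactics is a JSON array string
| summarize Phases = make_set(Tactic), Alerts = count() by CompromisedEntity
| where array_length(Phases) > 1
```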

Response procedure: On alert, the analyst: (1) checks the entity against the watchlists — is this a known benign source? (2) checks for correlated alerts from adjacent kill chain phases within 60 minutes, (3) classifies as TP/FP/BTP using the DE9.5 decision tree, and (4) escalates to Rachel if the alert correlates with other phases (potential active attack chain).
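Step 1 can be scripted rather than eyeballed. A hedged sketch using Sentinel's `_GetWatchlist()` function: the watchlist aliases follow the WL2/WL3 naming above, but the entity field you match on depends on your alert schema — adjust accordingly.

```kql
// Triage step 1: drop alerts whose entity is on a benign-source watchlist
let Benign = union
    (_GetWatchlist('WL2') | project SearchKey),   // service accounts
    (_GetWatchlist('WL3') | project SearchKey);   // IT admin accounts
SecurityAlert
| where TimeGenerated > ago(1h)
| where CompromisedEntity !in (Benign)   // only unexplained entities remain
```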

Decision point

You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?

Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.

A hunt query returns 200 results. You have 4 hours remaining in the hunt window. You can investigate 20 results thoroughly or review all 200 superficially. Which approach produces better hunt outcomes?

  • Review all 200 — you might miss a critical finding in the 180 you skip.
  • Investigate 20 thoroughly.
  • Investigate 20 — but only if they are from the most recent 24 hours.
  • Neither — refine the query first to reduce the result set below 50.

Investigate 20 thoroughly. A superficial review of 200 results produces 200 "looked at it, seemed okay" assessments that provide no investigative value and no documentation for future reference. A thorough investigation of 20 results produces confirmed findings (true positives requiring remediation), confirmed benign patterns (documented baselines for future comparison), and inconclusive results (flagged for monitoring). Prioritise the 20 by highest anomaly score, highest-value assets involved, and highest-risk users involved. Document why the remaining 180 were not investigated, and recommend a follow-up hunt with refined query criteria to reduce the result set.

You understand the detection gap and the hunt cycle.

TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.

  • 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
  • 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
  • Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
  • Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
  • TH16 — Scaling hunts across a team — the operating model for a production hunt program