TH0.8 Organizational Readiness for Hunting
Prerequisites are not optional
Hunting is a capability that builds on other capabilities. It is not the first thing you deploy. It is the thing you deploy after your detection foundation is solid enough that hunting can extend it rather than duplicate it.
An organization that cannot triage alerts effectively should not hunt. The hours are better spent fixing alert triage first. An organization that does not ingest the data sources hunting requires should not hunt. The queries will return empty results. An organization without an IR process should not hunt — because when hunting finds a compromise, the finding needs to go somewhere.
// Check which hunting-critical tables are ingested
// Run in your Sentinel workspace
let requiredTables = datatable(TableName:string)
[
"SigninLogs",
"AADNonInteractiveUserSignInLogs",
"AuditLogs",
"CloudAppEvents",
"SecurityAlert",
"DeviceProcessEvents",
"DeviceFileEvents",
"DeviceNetworkEvents",
"DeviceLogonEvents"
];
requiredTables
| join kind=leftouter (
    Usage
    | where TimeGenerated > ago(7d)
    | distinct DataType
) on $left.TableName == $right.DataType
| extend HasData = iff(isempty(DataType), "✗ Missing", "✓ Ingested")
| project TableName, HasData
// What tables have data in your workspace?
Usage
| where TimeGenerated > ago(7d)
| summarize DataGB = sum(Quantity) / 1024
by DataType
| sort by DataGB desc
// Compare this list against the required tables above
// Missing tables = hunting blind spots
Try it yourself
Exercise: Run the readiness assessment
Score your organization against each prerequisite:
1. Data ingestion: Run the Usage query from this subsection. Check each required table against the results. Score: all required tables ingested (ready) / some missing (address gaps) / most missing (not ready).
2. Detection rules: Count your active Sentinel analytics rules. Are 20+ deployed with ATT&CK mappings? Score: yes (ready) / 10-19 rules (close, build a few more) / fewer than 10 (prioritize detection engineering).
3. KQL proficiency: Can you write a join between SigninLogs and AuditLogs without reference documentation? Can you use summarize with make_set and dcount? Score: yes (ready) / partially (take Mastering KQL as a parallel track) / no (complete Mastering KQL first).
4. IR process: If hunting finds a compromised account tomorrow, do you know who to notify, what containment steps to take, and how to document the finding? Score: yes (ready) / informal (acceptable for initial hunts) / no process exists (address first).
5. Protected time: Can you block 4 hours next week for hunting with no alert queue responsibility? Score: yes (ready) / maybe (negotiate with SOC lead) / impossible (address alert workload first).
If you scored "ready" on all five, proceed to TH1. If you have gaps, the Ridgeline curriculum provides the courses to close them before investing in hunting.
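As a self-check for the KQL proficiency item, a query in this shape exercises exactly the skills named: a join between SigninLogs and AuditLogs plus summarize with dcount and make_set. This is a sketch assuming the standard Entra ID schemas for both tables:

```
// Self-check: correlate each user's sign-ins with directory changes they made
// Exercises: join, dcount, make_set
SigninLogs
| where TimeGenerated > ago(7d)
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
| join kind=inner (
    AuditLogs
    | where TimeGenerated > ago(7d)
    | extend Actor = tostring(InitiatedBy.user.userPrincipalName)
) on $left.UserPrincipalName == $right.Actor
| summarize
    SigninIPs = dcount(IPAddress),
    Operations = make_set(OperationName, 20)
    by UserPrincipalName
```

If you can write this from memory and explain why kind=inner is the right join flavor here, score yourself ready on item 3.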
The myth: Hunting requires a specialized team with dedicated headcount. Until we can hire threat hunters, we cannot hunt.
The reality: Most organizations that hunt effectively do not have dedicated hunting teams. They have SOC analysts who spend protected hours on hunting as part of a rotation — perhaps one analyst per week on "hunt duty" while the others handle the alert queue. The minimum viable hunting program is one analyst, 4 hours per week, with a prioritized backlog of hypotheses. That analyst can execute one hunt campaign per month from this course. Twelve campaigns per year, twelve detection rules produced, measurable coverage improvement. You do not need to hire before you hunt. You need to protect time and provide structure. This course provides the structure.
Extend this assessment
If your organization is considering a formal SOC maturity assessment — against the CMMI Cybermaturity Platform, the SOC-CMM, or a custom framework — the hunting readiness prerequisites in this subsection map directly to the proactive monitoring and threat detection domains of those frameworks. The readiness assessment exercise produces evidence that is useful for maturity assessments: documented data ingestion coverage, detection rule inventory with ATT&CK mapping, analyst capability assessment, and IR process documentation. If you are preparing for a maturity assessment anyway, the hunting readiness exercise produces dual-purpose documentation.
References Used in This Subsection
- Microsoft. "Microsoft Sentinel — Data Connectors." Microsoft Learn. https://learn.microsoft.com/en-us/azure/sentinel/data-connectors-reference
- Microsoft. "Usage table — Azure Monitor Logs." Microsoft Learn. https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/usage
- Course cross-references: Mastering KQL (KQL proficiency), SOC Operations (detection engineering), Practical IR (incident response process)
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.
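One way to surface candidate coverage-gap hunts directly from the workspace is to see which ATT&CK tactics your fired alerts actually span. A sketch, assuming the built-in SecurityAlert schema where Tactics is a comma-separated string:

```
// Which ATT&CK tactics have produced alerts in the last 90 days?
// Tactics absent from this list are candidates for coverage-gap hunts
SecurityAlert
| where TimeGenerated > ago(90d)
| where isnotempty(Tactics)
| mv-expand Tactic = split(Tactics, ",")
| summarize AlertCount = count() by Tactic = trim(" ", tostring(Tactic))
| sort by AlertCount desc
```

A tactic that never appears here is not proof of a gap — it may simply not have been triggered — but it tells you where your detection rules have never fired, which is exactly where a hypothesis-driven hunt earns its keep.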
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program