TH0.11 The Human Factor: What Makes a Good Hunter
Tools are not the bottleneck
Every campaign module in this course provides the KQL queries. You can copy them, run them, and get results. That is not hunting. Hunting is what happens between the results and the decision — the cognitive work of interpreting what the data shows, determining what it means in the context of your environment, and making the judgment call about whether to escalate, investigate further, or close.
This cognitive work requires five skills that are not taught by KQL documentation.
Skill 1: Environmental knowledge
// How many of your closed incidents involved multi-source investigation?
// Higher numbers suggest analysts with strong lateral thinking
SecurityIncident
| where TimeGenerated > ago(180d)
| where Status == "Closed"
| extend AlertCount = toint(
parse_json(tostring(AdditionalData)).alertsCount)
| where AlertCount > 0
| summarize
SingleSourceIncidents = countif(AlertCount == 1),
MultiSourceIncidents = countif(AlertCount > 1),
AvgAlertsPerIncident = avg(AlertCount)
// Multi-source incidents required the analyst to connect evidence
// across data sources — the same skill hunting demands
// If most incidents are single-source, the team may need to develop
// lateral investigation skills before hunting produces full valueTry it yourself
Exercise: Self-assess your hunting skill profile
Rate yourself on each of the five skills (strong / developing / gap):
- Environmental knowledge: Can you describe from memory what a typical day of sign-in data looks like in your environment? Do you know which IPs are your VPN egress, which users legitimately travel, which service accounts authenticate frequently?
- Lateral thinking: When you investigate an alert, do you routinely pivot to adjacent data sources, or do you analyze the alert's own data in isolation?
- Ambiguity tolerance: When a query result is ambiguous, do you enrich it with additional context, or do you default to "probably fine" or "probably bad"?
- Investigative patience: When a hunt's first query returns noise, do you refine and continue, or do you move on?
- Documentation discipline: Do you document negative findings with the same rigor as positive findings?
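To make the environmental-knowledge check concrete, the following is a minimal baseline sketch. It assumes a Microsoft Sentinel workspace with the standard SigninLogs table; adjust table and column names to your environment. If the daily numbers it returns surprise you, that is the knowledge gap to close before hunting.

```kusto
// Baseline sketch: what does a "typical day" of sign-in data look like?
// Assumes the standard Sentinel SigninLogs table is connected.
SigninLogs
| where TimeGenerated > ago(30d)
| summarize
    SignIns = count(),                          // total sign-in events per day
    DistinctUsers = dcount(UserPrincipalName),  // how many identities are active
    DistinctIPs = dcount(IPAddress)             // rough spread of source IPs
    by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```

An analyst with strong environmental knowledge can predict these numbers within rough bounds before running the query; the gap between prediction and result is a useful self-assessment signal.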
The areas where you rated "developing" or "gap" are the areas where the campaign modules will challenge you most — and where you will develop most from completing them.
The myth: Hunting is a specialist skill that requires dedicated certification (GCTH, CTHA, etc.) before an analyst can begin. Without certification, the analyst is not qualified to hunt.
The reality: Certifications validate knowledge. They do not create hunting capability. The skills described in this subsection — environmental knowledge, lateral thinking, ambiguity tolerance, investigative patience, documentation discipline — are developed through practice, not study. A SOC analyst with 12 months of alert triage experience, working KQL proficiency, and the structured methodology from TH1 can execute the campaigns in this course effectively. Certification is valuable for career development and for validating skills already built through practice. It is not a prerequisite for starting.
Extend this development
The most effective way to develop hunting skills is to hunt. Each campaign module in this course exercises all five skills through structured practice. The second most effective way is to review other analysts' hunt records — reading how someone else approached a hypothesis, what queries they ran, how they interpreted ambiguous results, and what conclusion they reached. If your organization has multiple analysts who hunt, periodic hunt record review sessions (TH15 covers this as peer review) build skill across the team faster than any amount of individual practice.
References Used in This Subsection
- SANS. "GIAC Certified Threat Hunter (GCTH)." https://www.giac.org/certifications/certified-threat-hunter-gcth/ — certification reference only
- Course cross-references: TH1.3 (iterative querying), TH1.4 (enrichment dimensions), TH1.7 (hunt documentation)
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are active now but may not target your environment. Coverage gaps describe techniques that could target your environment and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program