TH0.11 The Human Factor: What Makes a Good Hunter

3-4 hours · Module 0 · Free
Operational Objective
Hunting tools are freely available. KQL is documented. The data is in Sentinel. What separates an effective hunter from an analyst running queries is not access to technology — it is judgment, pattern recognition, and the willingness to sit with ambiguous data until it resolves into understanding. This subsection defines the cognitive skills that make hunting work and identifies how to develop them within your team.
Deliverable: An understanding of the skills that hunting requires beyond KQL proficiency — and a practical development path for building those skills through the campaign modules in this course.
⏱ Estimated completion: 20 minutes

Tools are not the bottleneck

Every campaign module in this course provides the KQL queries. You can copy them, run them, and get results. That is not hunting. Hunting is what happens between the results and the decision — the cognitive work of interpreting what the data shows, determining what it means in the context of your environment, and making the judgment call about whether to escalate, investigate further, or close.

This cognitive work requires five skills that are not taught by KQL documentation.

Skill 1: Environmental knowledge

// How many of your closed incidents involved multi-source investigation?
// Higher numbers suggest analysts with strong lateral thinking
SecurityIncident
| where TimeGenerated > ago(180d)
| where Status == "Closed"
| extend AlertCount = toint(
    parse_json(tostring(AdditionalData)).alertsCount)
| where AlertCount > 0
| summarize
    SingleSourceIncidents = countif(AlertCount == 1),
    MultiSourceIncidents = countif(AlertCount > 1),
    AvgAlertsPerIncident = avg(AlertCount)
// Multi-source incidents required the analyst to connect evidence
// across data sources — the same skill hunting demands
// If most incidents are single-source, the team may need to develop
// lateral investigation skills before hunting produces full value

A new IP in the sign-in logs means nothing without knowing whether it belongs to your corporate VPN, a legitimate cloud proxy, or an attacker's infrastructure. An inbox rule creation means nothing without knowing whether your organization routinely creates inbox rules via PowerShell for automated mailbox management. A process execution means nothing without knowing whether that binary is part of a deployed application or an attacker's tool.

Environmental knowledge — deep familiarity with what normal looks like in your specific organization — is the skill that converts raw query results into contextual understanding. It cannot be taught generically. It develops through operating in the environment: triaging alerts, investigating incidents, reading the data daily.

This is why experienced SOC analysts make the best hunters. They have spent months or years seeing the legitimate patterns. When the hunting query surfaces an anomaly, the analyst's accumulated knowledge of "what normal looks like here" is what enables the judgment: this is unusual, or this is Tuesday for the finance team.

If you are new to your environment, the orientation queries in each campaign module (step 1 of the collection phase) serve a dual purpose. They test the hypothesis and they teach you what the data looks like. Run them even if you think you know the environment. The data will surprise you.
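To make the orientation idea concrete, here is a minimal sketch of such a query. The table (SigninLogs, the standard Entra ID sign-in table in Sentinel) and the row limit are illustrative; adapt both to your workspace:

```kusto
// Orientation sketch: one week of sign-ins summarized by source IP and
// location. The goal is not detection; it is learning which sources
// are routine in this specific environment.
SigninLogs
| where TimeGenerated > ago(7d)
| summarize
    SignIns = count(),
    DistinctUsers = dcount(UserPrincipalName)
    by IPAddress, Location
| order by SignIns desc
| take 50
// Expect the top rows to be VPN egress points, cloud proxies, and
// office ranges: the baseline an anomaly must be judged against.
```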

Skill 2: Lateral thinking

A detection rule matches a pattern. Hunting follows connections. The AiTM session token replay appears in AADNonInteractiveUserSignInLogs — but the attacker's next step (inbox rule creation) appears in CloudAppEvents, and the phishing email that started it appears in EmailEvents. Following the attack chain across data sources requires thinking laterally: what would the attacker do next? Where would that activity appear? Which table records it?

Lateral thinking in hunting means asking "and then what?" at every stage. You found a new IP in the sign-in logs. And then what did the attacker do from that IP? You found an inbox rule. And then what was the inbox rule hiding? You found a consented application. And then what data did the application access?

Each "and then what?" generates a pivot query (step 4 from TH1.3). The ability to generate those pivots — to anticipate the attacker's next move based on the current finding — is what transforms a single-table anomaly into a multi-source investigation.
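As a sketch of one such pivot, suppose a previous query surfaced a suspicious sign-in IP. The "and then what?" question becomes: were any inbox rules created from that IP? The IP value below is a placeholder, not a real indicator:

```kusto
// "And then what?" pivot sketch: check CloudAppEvents for inbox rule
// creation from an IP surfaced in the sign-in logs.
let SuspectIP = "203.0.113.45";  // placeholder; substitute your finding
CloudAppEvents
| where TimeGenerated > ago(30d)
| where ActionType == "New-InboxRule"
| where IPAddress == SuspectIP
| project TimeGenerated, AccountDisplayName, ActionType, IPAddress
```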

Skill 3: Tolerance for ambiguity

Most hunt results are ambiguous. The data shows something unusual but does not definitively prove compromise. A single anomalous sign-in could be an attacker or a user on holiday. A single inbox rule could be malicious or could be a user organizing their email. A single file download spike could be data exfiltration or a legitimate end-of-quarter reporting cycle.

Analysts accustomed to alert triage — where the detection rule has already made the initial judgment and the analyst confirms or dismisses — may find hunting's ambiguity uncomfortable. There is no rule that pre-filtered the results. There is no confidence score attached. The analyst is working with raw data and must build the confidence assessment themselves through the enrichment dimensions from TH1.4.

The skill is sitting with the ambiguity long enough to enrich across multiple dimensions before making a judgment. Not jumping to "this is fine" (rationalization) or "this is an attack" (confirmation bias) but methodically adding context until the evidence supports a conclusion.
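One enrichment dimension can be sketched as a comparison against the user's own history. The account below is a placeholder for whatever your hunt query surfaced:

```kusto
// Enrichment sketch: compare an anomalous sign-in's location against
// the same user's 90-day history before judging it.
let SuspectUser = "user@example.com";  // placeholder account
SigninLogs
| where TimeGenerated > ago(90d)
| where UserPrincipalName =~ SuspectUser
| summarize SignIns = count(), FirstSeen = min(TimeGenerated) by Location
| order by SignIns desc
// A location seen today but absent from the prior 90 days raises
// confidence in the finding; a location seen daily lowers it.
```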

Skill 4: Investigative patience

A hunt is not a single query. It is five to fifteen queries, each informed by the last. The analyst who runs the first query, gets 500 results, and says "too noisy — the technique is not here" has not hunted. They have glanced. The signal was in the 500 results — it required narrowing, enriching, and contextualizing to extract.

Investigative patience means running the next query when the first one did not produce an obvious result. It means examining outliers in the data instead of dismissing them. It means following a thread through three data sources before concluding it is legitimate. The difference between a 2-hour hunt that finds nothing and a 6-hour hunt that discovers a compromise is often the analyst's willingness to keep pulling on ambiguous threads.
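The narrowing step can be sketched as a second-pass aggregation over a noisy first query. The hypothetical first pass here counted file downloads; the second pass isolates users whose peak daily volume departs sharply from their own average (the threshold is illustrative):

```kusto
// Narrowing sketch: rather than abandoning a noisy result set,
// aggregate it and isolate the outliers worth pulling on.
CloudAppEvents
| where TimeGenerated > ago(30d)
| where ActionType == "FileDownloaded"
| summarize Downloads = count()
    by AccountDisplayName, Day = bin(TimeGenerated, 1d)
| summarize DailyAvg = avg(Downloads), DailyMax = max(Downloads)
    by AccountDisplayName
| where DailyMax > 3 * DailyAvg   // illustrative threshold; tune it
| order by DailyMax desc
```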

Skill 5: Negative documentation discipline

The natural tendency is to document what you found. The discipline is to document what you did not find — and to treat the absence of findings as an output, not a failure.

TH0.7 established the value of negative findings. The skill is doing it consistently: completing the hunt record even when the conclusion is "no evidence found." Writing the scope, the queries, the result counts, and the conclusion for every hunt — not just the ones with exciting results. This discipline is what makes a hunting program auditable, measurable, and improvable.
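One lightweight way to make the discipline routine is to save every hunt query with a record header filled in regardless of outcome. The field names and values below are illustrative, not a prescribed schema:

```kusto
// HUNT:        TH-EXAMPLE-001 (illustrative identifier)
// HYPOTHESIS:  Attacker replays stolen session tokens from new infrastructure
// SCOPE:       AADNonInteractiveUserSignInLogs, 30 days, all users
// QUERIES RUN: 6 (orientation, 3 refinements, 2 pivots)
// RESULTS:     412 rows returned, 9 investigated, 0 confirmed malicious
// CONCLUSION:  No evidence found; recorded as a negative finding
// FOLLOW-UP:   Re-run after the next conditional access policy change
//
// <hunt query follows>
```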

[Figure: five hunting skills beyond KQL proficiency.
  Environmental knowledge — what does normal look like here?
  Lateral thinking — and then what did the attacker do?
  Ambiguity tolerance — sit with uncertainty until evidence resolves.
  Investigative patience — run the next query when the first was noisy.
  Documentation discipline — record what you didn't find too.
KQL is the language; these five skills are what you say with it. Each campaign module in this course exercises all five — that is how you develop them.]

Figure TH0.11 — Five cognitive skills for effective hunting. KQL proficiency is the prerequisite. These skills are what transform query results into operational intelligence.

Try it yourself

Exercise: Self-assess your hunting skill profile

Rate yourself on each of the five skills (strong / developing / gap):

Environmental knowledge: Can you describe from memory what a typical day of sign-in data looks like in your environment? Do you know which IPs are your VPN egress, which users legitimately travel, which service accounts authenticate frequently?

Lateral thinking: When you investigate an alert, do you routinely pivot to adjacent data sources, or do you analyze the alert's own data in isolation?

Ambiguity tolerance: When a query result is ambiguous, do you enrich it with additional context, or do you default to "probably fine" or "probably bad"?

Investigative patience: When a hunt's first query returns noise, do you refine and continue, or do you move on?

Documentation discipline: Do you document negative findings with the same rigor as positive findings?

The areas where you rated "developing" or "gap" are the areas where the campaign modules will challenge you most — and where you will develop most from completing them.

⚠ Compliance Myth: "Threat hunting requires specialized certification — our analysts are not qualified"

The myth: Hunting is a specialist skill that requires dedicated certification (GCTH, CTHA, etc.) before an analyst can begin. Without certification, the analyst is not qualified to hunt.

The reality: Certifications validate knowledge. They do not create hunting capability. The skills described in this subsection — environmental knowledge, lateral thinking, ambiguity tolerance, investigative patience, documentation discipline — are developed through practice, not study. A SOC analyst with 12 months of alert triage experience, working KQL proficiency, and the structured methodology from TH1 can execute the campaigns in this course effectively. Certification is valuable for career development and for validating skills already built through practice. It is not a prerequisite for starting.

Extend this development

The most effective way to develop hunting skills is to hunt. Each campaign module in this course exercises all five skills through structured practice. The second most effective way is to review other analysts' hunt records — reading how someone else approached a hypothesis, what queries they ran, how they interpreted ambiguous results, and what conclusion they reached. If your organization has multiple analysts who hunt, periodic hunt record review sessions (TH15 covers this as peer review) accelerate skill development across the team faster than any individual practice.



Decision point

You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?

Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.

A hunt query returns 200 results. You have 4 hours remaining in the hunt window. You can investigate 20 results thoroughly or review all 200 superficially. Which approach produces better hunt outcomes?

  • Review all 200 — you might miss a critical finding in the 180 you skip.
  • Investigate 20 thoroughly.
  • Investigate 20 — but only if they are from the most recent 24 hours.
  • Neither — refine the query first to reduce the result set below 50.

Investigate 20 thoroughly. A superficial review of 200 results produces 200 "looked at it, seemed okay" assessments that provide no investigative value and no documentation for future reference. A thorough investigation of 20 results produces: confirmed findings (true positives requiring remediation), confirmed benign patterns (documented baselines for future comparison), and inconclusive results (flagged for monitoring). Prioritize the 20 by: highest anomaly score, highest-value assets involved, and highest-risk users involved. Document why the remaining 180 were not investigated and recommend a follow-up hunt with refined query criteria to reduce the result set.
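The prioritization step can be sketched in KQL, assuming the hunt results were materialized with an anomaly score and that a watchlist of high-value users exists. HuntResults, AnomalyScore, and the 'VIPUsers' watchlist are all assumptions for illustration, not givens:

```kusto
// Prioritization sketch: rank hunt results so the 20 investigated are
// the ones most worth the time.
let HighValueUsers = toscalar(
    _GetWatchlist('VIPUsers')              // hypothetical watchlist
    | summarize make_set(UserPrincipalName));
HuntResults                                // placeholder: the 200-row output
| extend IsHighValue = set_has_element(HighValueUsers, UserPrincipalName)
| order by IsHighValue desc, AnomalyScore desc
| take 20
```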

You understand the detection gap and the hunt cycle.

TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.

  • 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
  • 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
  • Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
  • Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
  • TH16 — Scaling hunts across a team — the operating model for a production hunt program