TH0.16 Module Summary

3–4 hours · Module 0 · Free

Module Summary: The Detection Gap

This module established the case for proactive threat hunting — not as a theoretical capability but as an operational necessity driven by measurable gaps in detection coverage, documented attacker dwell times, and structural limitations of detection engineering that cannot be resolved by building more rules.

What you now know

The detection coverage illusion (TH0.1). Your detection rules cover a fraction of the ATT&CK techniques relevant to your M365 environment — typically 20–40% in mature programs. You can calculate this ratio from your own Sentinel data using the queries provided. The techniques in the gap are not obscure — they include OAuth abuse, token replay, inbox rule manipulation via Graph API, and conditional access policy modification. The data exists in your logs. The rules do not.
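The coverage ratio described in TH0.1 can be approximated directly in Sentinel. A minimal sketch, assuming your analytics rules populate the `Techniques` column of `SecurityAlert` and that `RelevantTechniqueCount` (hard-coded to 50 here) comes from your own ATT&CK scoping exercise:

```kql
// Count distinct ATT&CK techniques seen in fired alerts over 90 days,
// then divide by the number of techniques you judged relevant.
let RelevantTechniqueCount = 50;   // assumption: output of your ATT&CK scoping
SecurityAlert
| where TimeGenerated > ago(90d)
| mv-expand Technique = todynamic(Techniques) to typeof(string)
| where isnotempty(Technique)
| summarize CoveredTechniques = dcount(Technique)
| extend CoveragePct = round(100.0 * CoveredTechniques / RelevantTechniqueCount, 1)
```

Note the caveat: this counts techniques from alerts that actually fired, so rules that exist but never triggered in the window will not appear; comparing against rule definitions requires the analytics rules API instead.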

The dwell time gap (TH0.2). The global median dwell time is 10 days. In those 10 days, an attacker establishes persistence (day 1), maps the environment and identifies high-value targets (days 2–5), and executes their objective — BEC fraud, data exfiltration, or ransomware staging (days 5–10). Hunting compresses dwell time by finding the attacker in the persistence or reconnaissance phase, before the objective is achieved. The remediation cost difference between a day 3 discovery and a day 30 discovery is measured in orders of magnitude.

The detection pyramid (TH0.3). Three layers of threat visibility: known-known (detection rules), known-unknown (hunting), unknown-unknown (anomaly detection). Most SOCs invest exclusively in the base layer. The threats designed to evade detection rules — the ones that cause the most damage — operate in the middle and top layers.

The structural limitations of detection engineering (TH0.4). Five limitations are architectural, not staffing problems: rules encode anticipation (variants evade them), rules require ingested telemetry (missing tables create blind spots), rules trade sensitivity for specificity (thresholds create hiding places), rules are static (they decay as techniques evolve), and rules generate alerts without context (they match patterns but do not build understanding). No amount of rule-writing resolves these. Hunting addresses all five.

The threat landscape driving demand (TH0.5). The current M365 threat landscape — AiTM session hijacking, living-off-the-cloud, OAuth persistence, hybrid identity exploitation, ransomware pre-encryption staging — is specifically designed to operate in the detection gap. Every technique uses legitimate credentials and standard operations. The distinction between attacker and user is behavioral context, which is hunting territory.

Hunting, IR, and detection engineering (TH0.6). The three disciplines are complementary. Detection produces rules. IR investigates when rules fire. Hunting finds what rules miss and produces new rules. Six explicit handoffs connect them. Missing handoffs break the reinforcing cycle. This course — combined with Mastering KQL, SOC Operations, and Practical IR — teaches the full triad.

The ROI of hunting (TH0.7). The hunt-to-detection pipeline is the self-funding mechanism. Each campaign costs 4–8 analyst hours and produces at least one permanent detection rule. Twelve campaigns per year cost approximately $7,680 and produce 12+ new rules, measurable coverage improvement, and documented findings. The program pays for itself the first time it discovers or compresses dwell time on one intrusion that rules would have missed.
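The $7,680 figure follows from straightforward arithmetic. A sketch assuming the upper bound of 8 analyst hours per campaign and a fully loaded rate of $80/hour (both illustrative assumptions):

```kql
// Annual hunting cost: 12 campaigns/year, 8 hours each, $80/hour fully loaded.
print Campaigns = 12, HoursPerCampaign = 8, HourlyRateUsd = 80
| extend AnnualCostUsd = Campaigns * HoursPerCampaign * HourlyRateUsd   // 7,680
| extend RulesProduced = Campaigns                                      // at least one rule per campaign
| extend CostPerRuleUsd = AnnualCostUsd / RulesProduced                 // 640 per permanent detection
```

At roughly $640 per permanent detection rule, a single intrusion caught early covers the annual program cost many times over.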

Organizational readiness (TH0.8). Five prerequisites: sufficient data ingestion, baseline detection rules (20+), KQL proficiency, incident response process, and protected analyst time (4+ hours/week). All five must be met. Gaps should be addressed before investing hunting hours.

Common myths debunked (TH0.9). Seven myths that prevent organizations from starting or sustaining hunting programs — and the evidence-based responses that counter each one.

M365 data sources for hunting (TH0.10). Table-by-table reference for the three hunting data clusters: identity (SigninLogs, AADNonInteractive, AuditLogs), cloud apps (CloudAppEvents, EmailEvents, MicrosoftGraphActivityLogs), and endpoint (Device* tables). What each records, what each misses, and which campaigns depend on each.
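Before running campaigns that depend on these clusters, it is worth confirming each table is actually ingesting. A sketch using `isfuzzy=true` so the union succeeds even when a table is absent from the workspace:

```kql
// Row counts per hunting table over the last 7 days. A missing or zero-count
// table is a blind spot for any campaign that depends on it.
union withsource=SourceTable isfuzzy=true
    SigninLogs, AADNonInteractiveUserSignInLogs, AuditLogs,
    CloudAppEvents, EmailEvents, MicrosoftGraphActivityLogs,
    DeviceEvents, DeviceProcessEvents, DeviceNetworkEvents
| where TimeGenerated > ago(7d)
| summarize Rows = count() by SourceTable
| sort by Rows asc
```

Tables that appear in the union but return zero rows map directly onto the "what each misses" column of the TH0.10 reference.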

The human factor (TH0.11). Five cognitive skills beyond KQL: environmental knowledge, lateral thinking, ambiguity tolerance, investigative patience, and documentation discipline. The campaign modules develop all five through structured practice.

Hunting maturity models (TH0.12). The SANS Hunting Maturity Model — HMM0 through HMM4. This course targets HMM2 (procedural, documented, producing detection rules). Most organizations start at HMM0–HMM1.

Building the leadership case (TH0.13). Three communication formats — 60-second elevator pitch, 15-minute leadership brief, one-page business case — each framed for the appropriate audience (CISO, CFO, CTO).

Hunting program metrics dashboard (TH0.14). Production-ready KQL queries for four value metrics (coverage trend, hunt-derived rules, discovery rate, dwell time compression) and three operational health metrics (cadence, backlog depth, deployment speed).

Your first 90 days (TH0.15). Week-by-week implementation roadmap: readiness assessment (weeks 1–2), ATT&CK coverage analysis and backlog creation (weeks 3–4), first three campaigns (weeks 5–8), stabilization and first quarterly report (weeks 9–12).

What comes next

TH1 teaches the Hunt Cycle — the six-step methodology that every campaign module follows. It includes the hunt documentation template you will use for every campaign.

TH2 teaches the advanced KQL patterns — time-series anomaly detection, behavioral clustering, graph semantics, frequency analysis — that the campaign modules apply to specific threat domains. If you are already comfortable with make-series and series_decompose_anomalies(), you can skim TH2 and move to TH3.
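If those functions are unfamiliar, this is the shape of the pattern TH2 builds on. A minimal sketch assuming standard `SigninLogs` ingestion; the 2.5 anomaly threshold is an arbitrary starting point, not a recommendation:

```kql
// Hourly sign-in counts per user over 14 days, flagging hours that deviate
// from each user's own seasonal baseline.
SigninLogs
| where TimeGenerated > ago(14d)
| make-series SigninCount = count() default = 0
    on TimeGenerated from ago(14d) to now() step 1h
    by UserPrincipalName
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(SigninCount, 2.5)
| mv-expand TimeGenerated to typeof(datetime), SigninCount to typeof(long),
    Anomalies to typeof(int), Score to typeof(double)
| where Anomalies != 0
| project UserPrincipalName, TimeGenerated, SigninCount, Score
```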

TH3 is where the work begins. The ATT&CK coverage analysis exercise produces your organization’s first real hunt backlog — the prioritized list of hypotheses that feeds every campaign you run afterward.

TH4–TH13 are the campaigns. Each is self-contained. Run them in order for the full curriculum, or jump to the campaign most relevant to your highest-priority gap from TH3.


