A hunting program without metrics is a program that cannot demonstrate value, cannot identify where it is improving, and cannot justify continued investment. This subsection provides the KQL queries for a hunting program metrics dashboard — deployable in a Sentinel workbook on day one — that tracks the four metrics from TH0.7 plus operational health indicators.
Deliverable: A set of production-ready KQL queries that track hunting program effectiveness, deployable as a Sentinel workbook or run individually for quarterly reporting.
⏱ Estimated completion: 30 minutes
Measure what matters
TH0.7 defined four metrics: detection coverage gap closure rate, hunt discovery rate, dwell time compression, and MTTD trend. This subsection provides the KQL for each, plus three operational health metrics that tell you whether the program itself is functioning.
Metric 1: Detection coverage trend
Track this quarterly. The numerator comes from your Sentinel analytics rules with ATT&CK mappings. The denominator is your relevant technique set (defined once in TH3, updated annually).
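A minimal sketch of the coverage calculation. Analytics rule definitions are not directly queryable in KQL, so this proxy counts techniques seen in fired alerts instead of rule mappings; the watchlist name `RelevantTechniques` is a placeholder for the technique set you define in TH3, and the `Techniques` field assumes a newer `SecurityAlert` schema.

```kql
// Sketch: percentage of relevant ATT&CK techniques covered by at least one
// firing analytics rule in the last 90 days.
// Assumption: a watchlist named "RelevantTechniques" keyed on technique ID.
let Relevant = _GetWatchlist('RelevantTechniques')
    | project TechniqueId = tostring(SearchKey);
let TotalCount = toscalar(Relevant | count);
let Covered = SecurityAlert
    | where TimeGenerated > ago(90d)
    | mv-expand Technique = todynamic(Techniques)
    | distinct TechniqueId = tostring(Technique);
Relevant
| join kind=inner Covered on TechniqueId
| summarize CoveredCount = dcount(TechniqueId)
| extend CoveragePct = round(100.0 * CoveredCount / TotalCount, 1)
```

Run it at the end of each quarter and record `CoveragePct`; the quarter-over-quarter delta is your coverage gap closure rate.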
Metric 2: Hunt discovery rate
What percentage of incidents were discovered through proactive hunting versus automated detection? This metric requires consistent labeling — tag hunt-discovered incidents with a “HUNT-” prefix or a “hunt-discovered” label when escalating.
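A minimal sketch of the discovery-rate calculation, assuming the labeling convention above (a “HUNT-” title prefix or a “hunt-discovered” label) and the standard `SecurityIncident` table, which logs one row per incident state change:

```kql
// Sketch: share of incidents discovered by hunting over the last 90 days.
// Assumption: hunt-escalated incidents are titled "HUNT-*" or carry a
// "hunt-discovered" label, per the convention described above.
SecurityIncident
| where TimeGenerated > ago(90d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber  // latest state per incident
| extend HuntDiscovered = Title startswith "HUNT-" or Labels has "hunt-discovered"
| summarize Total = count(), HuntFound = countif(HuntDiscovered)
| extend HuntDiscoveryRatePct = round(100.0 * HuntFound / Total, 1)
```

The `arg_max` step matters: without it, incidents that were updated several times are counted once per update.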
Metric 3: Dwell time compression
Compare dwell time for hunt-discovered incidents versus rule-detected incidents. If hunting is working, hunt-discovered incidents should have shorter dwell times on average — because hunting found them before they would have been detected by other means.
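A minimal sketch of the comparison, assuming dwell time is approximated as the gap between `FirstActivityTime` and `CreatedTime` on the `SecurityIncident` table, and the same “HUNT-” / “hunt-discovered” labeling convention:

```kql
// Sketch: average dwell time (in days) for hunt-discovered vs. rule-detected
// incidents. Dwell time is approximated as incident creation time minus
// earliest observed activity — a lower bound, not true attacker dwell time.
SecurityIncident
| where TimeGenerated > ago(90d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where isnotempty(FirstActivityTime)
| extend DwellDays = datetime_diff('hour', CreatedTime, FirstActivityTime) / 24.0
| extend Source = iff(Title startswith "HUNT-" or Labels has "hunt-discovered",
                      "Hunt-discovered", "Rule-detected")
| summarize AvgDwellDays = round(avg(DwellDays), 1), Incidents = count() by Source
```

Note the caveat in the comment: `FirstActivityTime` reflects the earliest activity captured in the incident's alerts, so the metric compresses as your telemetry improves, not only as hunting improves. Track the trend, not the absolute number.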
Figure TH0.14 — Hunting program metrics. Four value metrics for leadership reporting. Three health metrics for internal program management.
Try it yourself
Exercise: Establish your baseline metrics
Run each of the five KQL queries in this subsection against your Sentinel workspace. Record the results as your baseline — the starting point before the hunting program begins (or the current state if hunting has already started).
If hunt-derived rules (HUNT-* naming) do not exist yet, metrics 2–5 will return empty results. That is your HMM0/HMM1 baseline. After executing your first three campaigns, re-run and compare.
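A quick check for whether any hunt-derived rules exist yet, assuming the `HUNT-*` naming convention carries through to alert names:

```kql
// Sketch: count alerts and distinct rules following the HUNT-* naming
// convention over the last 90 days. Empty results = HMM0/HMM1 baseline.
SecurityAlert
| where TimeGenerated > ago(90d)
| where AlertName startswith "HUNT-"
| summarize Alerts = count(), Rules = dcount(AlertName)
```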
If you want to deploy these as a persistent dashboard, create a Sentinel workbook with each query as a separate visualization. TH16 covers workbook creation in detail, but the queries above are ready to paste into workbook query tiles today.
⚠ Compliance Myth: "Hunting metrics are only for internal use — auditors do not care about them"
The myth: Hunting program metrics are operational data. Auditors want policies and procedures, not KQL query outputs.
The reality: Auditors want evidence that controls are operating effectively. A hunting program with documented metrics — hunts completed, coverage improved, incidents discovered, detection rules produced — provides stronger evidence of proactive monitoring than a policy that says “we will conduct threat hunting” without proof of execution. The quarterly metrics report is audit evidence. The hunt records referenced by those metrics are audit evidence. The detection rules deployed from hunts are audit evidence. Metrics are not operational overhead — they are the proof that the program exists beyond a document.
Extend this dashboard
The metrics here are the minimum viable set. Organizations with mature hunting programs often add: hypothesis source distribution (which of the six sources generates the most productive hypotheses?), false positive rate for hunt-derived rules (are hunt-based rules better tuned than non-hunt rules?), analyst skill development tracking (which analysts produce the most findings per hunt hour?), and technique recurrence (do techniques found by hunting reappear after remediation?). Add these as the program matures and the baseline metrics stabilize. Start with the seven described here.
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.