TH0.7 The ROI of Hunting

3-4 hours · Module 0 · Free
Operational Objective
Hunting requires analyst hours. Analyst hours cost money. Leadership will ask "what is the return?" — and "we might find bad things" is not a budget justification. This subsection builds the ROI argument in terms leadership understands: the hunt-to-detection pipeline as a self-funding mechanism, measurable metrics that demonstrate value, and the cost comparison between finding a compromise through hunting versus discovering it through external notification.
Deliverable: A quantified ROI framework for hunting that connects proactive hunt hours to detection gap closure, dwell time compression, and incident cost avoidance — with a business case template adapted for your organization.
⏱ Estimated completion: 25 minutes

### The question you will be asked

You present the detection gap. Leadership understands. You present the dwell time data. Leadership is concerned. You propose dedicated hunting hours. Leadership asks: “How do we measure whether it is working?”

This is the right question. An operational activity without measurable outcomes is an expense. An operational activity with measurable outcomes is an investment. Hunting must be the latter.

### The hunt-to-detection pipeline: the self-funding mechanism

Every validated hunt produces at least one detection rule. This is not optional — it is the Convert step in the Hunt Cycle (TH1). The detection rule automates the detection going forward: what you hunted for manually this month, a rule detects automatically every hour from now on.

This means hunting has a compounding return. Each hunt campaign:

  1. Searches for a specific threat in historical data (immediate value — find compromise or confirm absence)
  2. Produces a tested, tuned detection rule (permanent value — the technique is now automatically detected)
  3. Reduces the known-unknown layer by one technique (structural value — the detection gap shrinks)

After 12 months of monthly hunting campaigns, the organization has:

  • 12 completed hunt campaigns with documented findings
  • 12+ new detection rules deployed, each covering a technique that previously had no automated detection
  • A measurable reduction in the detection coverage gap (from TH0.1’s ratio)
  • Evidence of any compromises discovered through hunting (which would not have been found otherwise)

The detection rules do not expire. They continue to fire indefinitely. Year 2 of hunting does not re-hunt the same techniques — it hunts the next 12 techniques on the backlog while the first 12 are now automated. The coverage gap closes progressively.

### Metrics that demonstrate value

Four metrics matter. Each is directly measurable from your Sentinel workspace.

**1. Detection coverage gap closure rate.** Baseline your detection coverage ratio (TH0.1) before the hunting program starts. Re-measure quarterly. The ratio should increase as hunts produce new detection rules. If you start at 25% and add 12 rules covering 12 new techniques in a year, and your relevant technique set is 100, your coverage moves from 25% to 37%. That is a 48% improvement in detection coverage directly attributable to hunting.
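The coverage arithmetic above is easy to encode so it can be re-run each quarter. A minimal sketch in Python, using the illustrative figures from the text (a 100-technique relevant set and a 25% baseline are assumptions you would replace with your own TH0.1 numbers):

```python
# Illustrative coverage-gap arithmetic (hypothetical figures from the text).
relevant_techniques = 100   # techniques in the organization's relevant set (TH0.1)
baseline_covered = 25       # techniques with automated detection at baseline
rules_added_per_year = 12   # one validated hunt -> at least one new rule

baseline_ratio = baseline_covered / relevant_techniques                       # 0.25
new_ratio = (baseline_covered + rules_added_per_year) / relevant_techniques   # 0.37

absolute_gain_pp = (new_ratio - baseline_ratio) * 100           # +12 percentage points
relative_improvement = (new_ratio - baseline_ratio) / baseline_ratio  # ~0.48 -> 48%

print(f"Coverage: {baseline_ratio:.0%} -> {new_ratio:.0%} "
      f"(+{absolute_gain_pp:.0f} pp, {relative_improvement:.0%} relative improvement)")
```

The distinction the sketch makes explicit is the one leadership will probe: +12 is the absolute gain in percentage points, while 48% is the relative improvement over the baseline. Report both.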

**2. Hunt discovery rate.** Of all incidents in the measurement period, what percentage were discovered through proactive hunting rather than automated alerting?

```
// Hunt discovery rate: what percentage of incidents came from hunting?
SecurityIncident
| where TimeGenerated > ago(180d)
| where Status == "Closed"
| extend DiscoverySource = iff(
    Title has "HUNT-" or tostring(Labels) has "hunt-discovered",
    "Proactive Hunting",
    "Automated Detection")
// Tag hunt-discovered incidents with "HUNT-" prefix or "hunt-discovered" label
// This requires consistent naming when escalating hunt findings to IR
| summarize IncidentCount = count() by DiscoverySource
| extend Percentage = round(100.0 * IncidentCount / toscalar(
    SecurityIncident
    | where TimeGenerated > ago(180d)
    | where Status == "Closed"
    | count), 1)
// Even a 5% hunt discovery rate means 5% of incidents would have
// gone undetected without hunting, including potentially the
// highest-impact intrusions with the longest dwell times
```

An organization with zero hunting discovers 100% of incidents through rules (or external notification). An organization with an active hunting program will discover some percentage through hunts: compromises that would not have been found by any existing rule. Even a 5% hunt discovery rate means 5% of the organization's incidents would have gone undetected without hunting. At scale, those incidents may include the highest-impact intrusions.

**3. Dwell time compression.** Compare the median dwell time for incidents discovered through hunting versus incidents discovered through automated detection. If hunting discovers compromises at a median of 3 days while rules discover at a median of 12 days, hunting is compressing dwell time by 9 days per incident. Each compressed day represents avoided attacker activity — data not exfiltrated, persistence not established, lateral movement not completed.
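Metric 3 reduces to comparing two medians. A sketch in Python with hypothetical dwell times (in practice each value would be compromise date vs. detection date from your incident records; these sample lists are invented for illustration):

```python
import statistics

# Hypothetical dwell times in days per closed incident, split by discovery source.
hunt_discovered = [2, 3, 3, 5]          # found by proactive hunt campaigns
rule_discovered = [8, 12, 12, 20, 45]   # found by automated detection rules

hunt_median = statistics.median(hunt_discovered)   # 3 days
rule_median = statistics.median(rule_discovered)   # 12 days
compression = rule_median - hunt_median            # 9 days of attacker activity avoided

print(f"Median dwell: hunting {hunt_median}d vs rules {rule_median}d "
      f"-> {compression} days compressed per hunt-discovered incident")
```

Medians are used deliberately: one 45-day outlier should not be allowed to dominate (or flatter) the comparison the way a mean would.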

**4. Mean time to detect (MTTD) trend.** Track MTTD over time. As hunting produces new detection rules, techniques that previously required hunting to discover become automatically detected. MTTD for those techniques drops from "whenever the next hunt runs" to "within the rule's query frequency (typically 5–60 minutes)." MTTD should trend downward as the detection layer expands through hunting outputs.

### The cost comparison leadership understands

The most effective ROI argument is not the metrics. It is the cost comparison.

**Cost of finding a compromise through hunting:** A hunt campaign takes one analyst 4–8 hours. At a fully loaded analyst cost of $60–$80/hour (mid-market), that is $240–$640 per campaign. If the hunt discovers a compromise, the incident response begins at day 3 (when the hunt found it) instead of day 30 (when a rule might eventually have caught it) or day 90 (when external notification might have arrived). The remediation at day 3 is contained — password resets, session revocation, persistence removal. A few hours of focused work.

**Cost of finding a compromise through external notification:** The average cost of a data breach in 2023 (IBM Cost of a Data Breach Report) was $4.45 million globally, $9.48 million in the United States. Breaches discovered by the organization's own security team cost significantly less than breaches reported by third parties or disclosed by the attackers themselves (via a ransom note). The difference — the cost avoided by internal detection versus external notification — is measured in millions.

The arithmetic: 12 hunt campaigns per year at $640 each = $7,680 in analyst time. One compromise discovered through hunting instead of external notification avoids remediation costs, regulatory penalties, legal fees, and reputational damage that dwarf the hunting investment by orders of magnitude. Hunting does not need to find a compromise every month to be worth the investment. It needs to find one compromise per year that would otherwise have gone undetected — and the probability of that increases with every campaign.
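The payback arithmetic above can be written as a small model for your own business case. A sketch using the text's illustrative mid-market figures (the $80/hour rate, 8-hour campaigns, and the ~$1M internal-vs-external cost differential are assumptions to replace with your organization's numbers):

```python
# Payback arithmetic for a hunting program (hypothetical mid-market figures).
analyst_rate = 80            # $/hour, fully loaded
hours_per_campaign = 8       # upper end of the 4-8 hour range
campaigns_per_year = 12      # monthly cadence

annual_hunt_cost = analyst_rate * hours_per_campaign * campaigns_per_year  # $7,680

# IBM benchmark: internally detected breaches cost roughly $1M less
# than externally notified ones.
cost_differential = 1_000_000

# Fraction of one incident the program must catch early to pay for itself.
payback_fraction = annual_hunt_cost / cost_differential

print(f"Annual hunting cost: ${annual_hunt_cost:,}")
print(f"Payback: {payback_fraction:.2%} of one incident's avoided cost")
```

The output makes the leadership argument concrete: the program pays for itself if it accelerates detection of well under one percent of a single breach's cost differential per year.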

### What hunting does not cost

A common objection: "We cannot afford to take analysts off the alert queue for hunting." The implicit assumption is that hunting hours come at the expense of alert triage.

This framing is wrong.

Hunting produces detection rules. Detection rules that are well-tuned (because they were built from validated hunt data, not theoretical analysis) produce fewer false positives than rules built without environmental context. Fewer false positives mean fewer wasted analyst hours on the alert queue. The hours "spent" on hunting return as hours saved on alert triage — plus the detection rule itself, which provides permanent coverage.

The calculation: one 8-hour hunt campaign that produces one detection rule. That rule fires 3 times per month with a 90% true positive rate (achievable because it was built from real data). Three high-quality alerts per month replace the zero alerts that existed before (the technique had no rule). The false positives the rule does generate (0.3 per month at 90% precision) are minimal. The analyst time consumed by the new rule is negligible.

Now compare: 8 hours of hunting produced permanent automated coverage of a technique that was previously invisible. The alternative — not hunting, not building the rule — means the technique remains undetected indefinitely. The 8 hours are not a cost. They are an investment that compounds.
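The alert-load claim in the calculation above is worth sanity-checking explicitly. A trivial sketch using the paragraph's illustrative figures (3 firings per month and 90% precision are the text's assumptions, not universal constants):

```python
# Alert-load arithmetic for one hunt-derived detection rule (illustrative figures).
rule_fires_per_month = 3
precision = 0.90   # true-positive rate achievable when the rule is built from real data

true_positives = rule_fires_per_month * precision         # ~2.7 useful alerts/month
false_positives = rule_fires_per_month * (1 - precision)  # ~0.3 wasted alerts/month

print(f"~{true_positives:.1f} true positives and ~{false_positives:.1f} "
      f"false positives per month from the new rule")
```

At that precision, the triage cost of the new rule is a rounding error against the permanent coverage it provides.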

<div class="diagram-container">
<svg viewBox="0 0 720 240" fill="none" xmlns="http://www.w3.org/2000/svg" width="100%">
<text x="360" y="22" fill="#64748b" font-family="sans-serif" font-size="11" text-anchor="middle" font-weight="600" letter-spacing="1">HUNT-TO-DETECTION PIPELINE — THE COMPOUNDING RETURN</text>
<!-- Hunt box -->
<rect x="30" y="50" width="140" height="60" rx="6" fill="#112436" stroke="#0891b2" stroke-width="2"/>
<text x="100" y="72" fill="#0891b2" font-family="sans-serif" font-size="9" font-weight="700" text-anchor="middle">HUNT CAMPAIGN</text>
<text x="100" y="88" fill="#94a3b8" font-family="sans-serif" font-size="7" text-anchor="middle">4-8 analyst hours</text>
<text x="100" y="99" fill="#94a3b8" font-family="sans-serif" font-size="7" text-anchor="middle">$240-640 cost</text>
<!-- Arrow -->
<line x1="170" y1="80" x2="220" y2="80" stroke="#E86A2A" stroke-width="2" marker-end="url(#arwR)"/>
<!-- Findings -->
<rect x="220" y="45" width="110" height="30" rx="5" fill="#112436" stroke="#059669" stroke-width="1.5"/>
<text x="275" y="64" fill="#059669" font-family="sans-serif" font-size="8" font-weight="600" text-anchor="middle">FINDINGS</text>
<rect x="220" y="80" width="110" height="30" rx="5" fill="#112436" stroke="#E86A2A" stroke-width="1.5"/>
<text x="275" y="99" fill="#E86A2A" font-family="sans-serif" font-size="8" font-weight="600" text-anchor="middle">DETECTION RULE</text>
<!-- Arrow from rule -->
<line x1="330" y1="95" x2="380" y2="95" stroke="#E86A2A" stroke-width="2" marker-end="url(#arwR)"/>
<!-- Permanent detection -->
<rect x="380" y="50" width="160" height="60" rx="6" fill="#112436" stroke="#059669" stroke-width="2"/>
<text x="460" y="72" fill="#059669" font-family="sans-serif" font-size="9" font-weight="700" text-anchor="middle">PERMANENT COVERAGE</text>
<text x="460" y="88" fill="#94a3b8" font-family="sans-serif" font-size="7" text-anchor="middle">Automated, 24/7, ongoing</text>
<text x="460" y="99" fill="#94a3b8" font-family="sans-serif" font-size="7" text-anchor="middle">$0 marginal cost per detection</text>
<!-- Arrow to gap reduction -->
<line x1="540" y1="80" x2="580" y2="80" stroke="#059669" stroke-width="2" marker-end="url(#arwG)"/>
<rect x="580" y="50" width="120" height="60" rx="6" fill="#112436" stroke="#059669" stroke-width="2"/>
<text x="640" y="72" fill="#059669" font-family="sans-serif" font-size="9" font-weight="700" text-anchor="middle">GAP CLOSES</text>
<text x="640" y="88" fill="#94a3b8" font-family="sans-serif" font-size="7" text-anchor="middle">Coverage ratio ↑</text>
<text x="640" y="99" fill="#94a3b8" font-family="sans-serif" font-size="7" text-anchor="middle">Dwell time ↓</text>
<!-- Bottom: compound effect -->
<rect x="120" y="140" width="480" height="45" rx="6" fill="#112436" stroke="#f59e0b" stroke-width="1.5"/>
<text x="360" y="158" fill="#f59e0b" font-family="sans-serif" font-size="9" font-weight="600" text-anchor="middle">COMPOUNDING: 12 campaigns/year → 12+ new rules → permanent gap closure</text>
<text x="360" y="173" fill="#cbd5e1" font-family="sans-serif" font-size="7" text-anchor="middle">Year 2 hunts different techniques because Year 1 techniques are now automated</text>
<!-- Cost comparison -->
<text x="360" y="210" fill="#64748b" font-family="sans-serif" font-size="9" text-anchor="middle" font-style="italic">12 hunts/year = ~$7,680 analyst time</text>
<text x="360" y="227" fill="#64748b" font-family="sans-serif" font-size="9" text-anchor="middle" font-style="italic">1 breach discovered internally vs externally = $millions in avoided cost</text>
<defs>
<marker id="arwR" viewBox="0 0 10 10" refX="10" refY="5" markerWidth="5" markerHeight="5" orient="auto"><path d="M 0 0 L 10 5 L 0 10 z" fill="#E86A2A"/></marker>
<marker id="arwG" viewBox="0 0 10 10" refX="10" refY="5" markerWidth="5" markerHeight="5" orient="auto"><path d="M 0 0 L 10 5 L 0 10 z" fill="#059669"/></marker>
</defs>
</svg>
</div>
<p class="figure-caption">Figure TH0.7 — The hunt-to-detection pipeline. Each hunt costs hours. Each rule it produces provides permanent coverage at zero marginal cost. The return compounds annually.</p>

### The negative finding has value

Not every hunt finds a compromise. Most will not. This is frequently cited as evidence that hunting is not productive: "We hunted for six months and found nothing."

The finding "nothing" is not "nothing." It is documentation that a specific threat technique was searched for across a specific data set over a specific time window and no evidence of compromise was found. That documentation:

**Reduces organizational uncertainty.** Before the hunt, you did not know whether OAuth consent abuse had occurred in the last 90 days. After the hunt, you know it has not (or at least, you know it has not left evidence in the available telemetry). That reduction in uncertainty has compliance value — it demonstrates proactive security measures — and operational value — it allows you to focus hunting resources on other techniques.

**Establishes baselines.** A hunt that finds no compromise still produces data about what normal looks like. The authentication pattern hunt that reveals no anomalies also reveals the baseline: normal sign-in volume per user, normal geographic distribution, normal device diversity. That baseline becomes the reference point for future hunts and for anomaly detection.

**Validates detection rules.** A hunt that examines a technique already covered by a detection rule and finds no evidence that the rule missed validates the rule's effectiveness. The rule is working as intended. That validation is a finding — it confirms that your detection engineering for that technique is sound.

**Satisfies audit requirements.** Regulatory frameworks and security maturity models increasingly expect evidence of proactive threat monitoring. A hunt log that documents hypotheses tested, data examined, and conclusions reached — even when the conclusion is "no evidence found" — satisfies the requirement in a way that an empty incident queue does not.

<div class="try-it">
<h4>Try it yourself</h4>
<details>
<summary>Exercise: Build your hunting ROI model</summary>
<p>Estimate the following for your organization:</p>
<p><strong>Hunt cost:</strong> Fully loaded hourly cost of an analyst × 6 hours average per hunt campaign × 12 campaigns per year = annual hunting cost: $___</p>
<p><strong>Detection rule value:</strong> If each hunt produces 1 detection rule, and each rule provides automated coverage of 1 technique previously unmonitored, then 12 hunts produce 12 techniques of new automated coverage per year. Against a relevant technique set of 100, that is a 12 percentage point improvement in detection coverage annually.</p>
<p><strong>Incident avoidance:</strong> If your organization's average incident cost (from your insurance carrier, your CFO, or the IBM Cost of a Data Breach Report for your industry) is $______, then one incident discovered through hunting instead of external notification avoids the cost differential between internal and external discovery. IBM reports that internally detected breaches cost an average of $1 million less than externally notified breaches.</p>
<p><strong>The payback equation:</strong> Annual hunting cost ÷ incident cost differential = the fraction of one incident your hunting program needs to prevent or detect early to pay for itself.</p>
<p>For most organizations, the payback is a fraction of a single incident. The hunting program pays for itself the first time it compresses dwell time on one intrusion.</p>
</details>
</div>

<div class="compliance-myth">
<div class="myth-header">⚠ Compliance Myth: "If threat hunting does not find anything, it was a waste of time"</div>
<div class="myth-body">

**The myth:** Hunting is only productive when it discovers a compromise. If the hunt finds no threats, the hours were wasted.

**The reality:** A hunt that finds no compromise produces: a negative finding that reduces uncertainty, baseline data for future comparison, detection rule validation, audit documentation, and environmental understanding that improves detection engineering. Every compliance framework that evaluates security maturity — ISO 27001, NIST CSF, SOC 2 — gives credit for proactive monitoring activities regardless of whether they find threats. The absence of findings is a positive indicator, not a failure. It becomes a failure only if you stop hunting because "nothing was found" — because the threats in the known-unknown layer are still there, and the next hunt may find them.

</div>
</div>

<div class="callout callout-info">
<h4>Extend this model</h4>
<p>If your organization has a cyber insurance policy, the hunting ROI model has an additional dimension. Many cyber insurance carriers offer premium reductions or improved coverage terms for organizations that demonstrate proactive threat monitoring. A documented hunting program — with a defined cadence, hunt logs, and metrics — may qualify for better terms. Check with your broker. The premium reduction alone may offset the hunting program's analyst cost, making the detection improvement and incident avoidance value pure upside.</p>
</div>

---

<div class="artifact-footer">
<div class="artifact-header">📋 Operational Artifact — Hunting ROI Calculator</div>
<div class="artifact-body">

> **Annual Hunting Program Cost**
> Analyst hourly rate (fully loaded): $_____ × 6 hours per campaign × 12 campaigns/year = $_____ /year
>
> **Annual Hunting Program Output**
> Detection rules produced: ~12+ new rules covering 12+ previously unmonitored techniques
> Detection coverage improvement: +_____ percentage points (from baseline ratio)
> Hunts documented: 12 with full hypothesis, data, findings, and conclusions
>
> **Incident Cost Avoidance (per incident discovered through hunting)**
> Average breach cost (your industry, IBM/Ponemon data): $_____
> Internal vs external discovery cost differential: ~$1M (IBM benchmark)
> Incidents needed to justify program: Annual cost ÷ cost differential = _____ (typically <1)
>
> **Bottom line for leadership:** The hunting program costs [annual cost]. It produces permanent detection improvements worth [12 rules × technique value]. It pays for itself the first time it discovers or compresses dwell time on one intrusion that rules would have missed. Every subsequent discovery is pure return.

</div>
</div>

### References Used in This Subsection
- IBM Security. "Cost of a Data Breach Report 2023." [https://www.ibm.com/reports/data-breach](https://www.ibm.com/reports/data-breach)
- Ponemon Institute / IBM. Internal vs external breach discovery cost differential data.
- ISO/IEC 27001:2022 — Annex A Controls A.8.8 (Management of Technical Vulnerabilities) and A.5.25 (Assessment and Decision on Information Security Events)
- NIST Cybersecurity Framework 2.0 — DE.CM (Security Continuous Monitoring), DE.AE (Anomalies and Events)
