
TH0.15 Your First 90 Days: From Zero to Operating

3-4 hours · Module 0 · Free
Operational Objective
You have the business case, the methodology, the readiness assessment, and the metrics framework. This subsection converts all of it into a week-by-week implementation plan that takes you from "no hunting program" to "operating hunting program with measurable output" in 90 days. Not theoretical — a practical calendar you follow.
Deliverable: A 90-day implementation roadmap with weekly milestones, specific deliverables, and go/no-go checkpoints.
⏱ Estimated completion: 20 minutes

90 days to an operating program

The roadmap assumes you are at HMM0 or HMM1 (TH0.12). You have Sentinel deployed. You have analysts who can write KQL. You have leadership support (TH0.13) or at least implicit approval to dedicate 4+ hours per week. If any of these are missing, address them first — this roadmap starts after the prerequisites from TH0.8 are met.

Weeks 1–2: Foundation

Goal: Confirm readiness and establish baseline metrics.

Week 1:

// Week 1 summary query: your hunting program starting position
let coverageNumerator = toscalar(
    SecurityAlert
    | where TimeGenerated > ago(90d)
    | where ProviderName == "ASI Scheduled Alerts"
    | extend Techniques = parse_json(tostring(
        parse_json(ExtendedProperties)["Techniques"]))
    | mv-expand Technique = Techniques to typeof(string)
    | where isnotempty(Technique)
    | summarize by Technique
    | count);
let dwellMetrics = SecurityIncident
| where TimeGenerated > ago(180d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber  // latest state of each incident
| where Status == "Closed"
| extend EarliestEvidence = todatetime(
    parse_json(tostring(AdditionalData)).firstActivityTimeUtc)
| where isnotempty(EarliestEvidence)
| extend DwellDays = datetime_diff('day', CreatedTime, EarliestEvidence)
| where DwellDays >= 0 and DwellDays < 365
| summarize Median = percentile(DwellDays, 50),
    P75 = percentile(DwellDays, 75),
    P90 = percentile(DwellDays, 90);
dwellMetrics
| extend CoveredTechniques = coverageNumerator
// Combine CoveredTechniques with your ATT&CK Navigator denominator for the full ratio
// Record these numbers — they are your Day 0 baseline
Expand for Deeper Context

- Run the readiness assessment from TH0.8. Score all five prerequisites. Document gaps.
- Run the data source audit from TH0.10. Confirm which tables are ingested. File requests to enable any missing critical tables (AADNonInteractiveUserSignInLogs is the most commonly missing and most impactful). A quick presence-check sketch follows this list.
- Run the detection coverage ratio query from TH0.1. Record the numerator, denominator, and ratio. This is your coverage baseline.
- Run the dwell time baseline query from TH0.2. Record median, P75, P90. This is your dwell time baseline.
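
If you want a fast sanity check before the full TH0.10 audit, a minimal sketch is below. It lists billable ingestion volume per table over the last 7 days; any table you expect to hunt in but do not see here needs a connector or diagnostic-setting fix.

// Quick data source presence check (the full audit procedure is in TH0.10).
// The Usage table reports Quantity in MB, hence the division by 1024.
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| sort by IngestedGB desc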

Week 2:

- Complete the HMM assessment from TH0.12. Document current level.
- Set up the metrics queries from TH0.14 — either as a Sentinel workbook or saved queries. Record baseline values (most will be zero if this is a new program). A sketch of one approach follows this list.
- Define your hunting cadence: how many hours per week, which analyst(s), which day(s). Block the calendar. Protect the time from the alert queue.
- Read TH1 (Hunt Cycle methodology). Internalize the six steps. Print the hunt record template from TH1.7.
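
One way to make the TH0.14 metrics repeatable is a saved query over a hunt-record watchlist. The sketch below assumes a watchlist named HuntLog with CompletedDate, Outcome, and RuleCreated columns; those are placeholder names, so use whatever structure the TH0.14 template defines.

// Sketch only: quarterly hunt-output metrics from a hunt-record watchlist.
// "HuntLog" and its columns are assumed names; substitute your TH0.14 structure.
_GetWatchlist('HuntLog')
| extend CompletedDate = todatetime(CompletedDate)
| where CompletedDate > ago(90d)
| summarize
    HuntsCompleted = count(),
    TruePositiveFindings = countif(Outcome == "TruePositive"),
    DetectionRulesProduced = countif(RuleCreated == "Yes")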

Weeks 3–4: ATT&CK Coverage Analysis

Goal: Build your hunt backlog.

- Complete TH3 (ATT&CK Coverage Analysis) as your first dedicated hunting activity. This exercise maps your detection rules to ATT&CK, identifies coverage gaps, and produces a prioritized hunt backlog.
- The backlog should contain at least 10 hypotheses, each scored by threat relevance × data availability × detection gap severity. A scoring sketch follows this list.
- Select the top 3 hypotheses for your first three campaigns. These should be from different technique domains — do not start with three authentication hunts if your gaps include OAuth, email, and privilege escalation.
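
The scoring does not need tooling; even a throwaway KQL datatable makes the ranking explicit. The hypotheses and 1–3 scores below are illustrative examples, not values from the NE environment.

// Illustrative backlog scoring: priority = threat relevance x data availability x gap severity.
// Hypotheses and scores are made-up examples; replace them with your own backlog.
datatable(Hypothesis:string, ThreatRelevance:int, DataAvailability:int, GapSeverity:int)
[
    "OAuth consent grant abuse (T1528)",          3, 3, 3,
    "Mailbox forwarding rule abuse (T1114.003)",  3, 2, 3,
    "AiTM token theft (T1557)",                   2, 3, 2,
    "Scheduled task persistence (T1053.005)",     2, 2, 1
]
| extend PriorityScore = ThreatRelevance * DataAvailability * GapSeverity
| top 3 by PriorityScore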

Weeks 5–8: First Three Campaigns

Goal: Execute three complete hunt campaigns with full documentation and detection rule output.

Weeks 5–6: Campaign 1.

- Execute the full Hunt Cycle from TH1 against your top-priority hypothesis.
- Complete the hunt record using the template from TH1.7.
- Convert the hunt query to a detection rule (TH1.6). Deploy in report-only mode.
- Expected time: 4–6 hours for the hunt, 30 minutes for documentation, 30 minutes for rule conversion.

Week 7: Campaign 2.

- Second hypothesis from the backlog. Same process.
- The second hunt will be faster — the methodology is now familiar and the analyst has environmental context from Campaign 1.

Week 8: Campaign 3.

- Third hypothesis. Same process.
- After three campaigns, you have: 3 hunt records (documented), 3 detection rules (deployed in report-only mode), and measurable data for metrics reporting.

Weeks 9–12: Stabilization and First Report

Goal: Validate detection rules, measure program output, and deliver the first quarterly report.

Weeks 9–10:

- Review the three detection rules deployed in report-only mode. Check false positive rates. Tune thresholds and exclusions based on the 14-day validation window. Promote validated rules to production (creating incidents). A validation query sketch follows this list.
- Re-run the detection coverage ratio query. The numerator should have increased by 3 (three new techniques now covered). Calculate the new ratio and the improvement.
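
A minimal sketch of the validation check is below. The rule names are placeholders for the three rules your campaigns produced, and alert counts alone are not a false positive rate; pair the numbers with the triage outcomes from your validation review.

// Sketch: alert volume for the three new rules during the 14-day validation window.
// Replace the placeholder names with the display names of your report-only rules.
SecurityAlert
| where TimeGenerated > ago(14d)
| where ProviderName == "ASI Scheduled Alerts"
| where AlertName in ("Campaign 1 rule", "Campaign 2 rule", "Campaign 3 rule")
| summarize Alerts = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by AlertName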

Weeks 11–12:

- Compile the first quarterly hunting program report using the template from TH0.14.
- Report to leadership: coverage improvement, hunts completed, rules produced, any incidents discovered.
- Update the backlog: add new hypotheses from any incidents investigated during the 90 days, from threat intelligence consumed, and from environmental changes observed.
- Plan the next quarter: select the next 3 campaign hypotheses from the prioritized backlog.

90-DAY IMPLEMENTATION ROADMAP

- Weeks 1–2, Foundation: readiness assessment, data source audit, baseline metrics, calendar + methodology.
- Weeks 3–4, Backlog: ATT&CK coverage analysis (TH3), 10+ hypotheses, top 3 selected.
- Weeks 5–8, First three campaigns: 3 complete Hunt Cycle executions, 3 hunt records documented, 3 detection rules deployed, ~18 total analyst hours.
- Weeks 9–12, Stabilize: validate rules, measure improvement, first quarterly report, plan Q2.

Day 90 deliverables: 3 hunt records, 3 detection rules, measurable coverage improvement, quarterly report to leadership, prioritized backlog for the next quarter.

Figure TH0.15 — 90-day hunting program implementation roadmap. Four phases, each with specific deliverables. Total analyst time investment: approximately 30–40 hours over 12 weeks.

The Day 90 checkpoint

After 90 days, you should have:

- A readiness assessment with all prerequisites confirmed (or gaps documented and addressed)
- Baseline metrics recorded (coverage ratio, dwell time, HMM level)
- A prioritized hunt backlog with 10+ hypotheses
- 3 completed hunt records with full documentation
- 3 detection rules deployed (or in validation), each covering a technique that previously had no automated detection
- Updated coverage ratio showing measurable improvement
- A quarterly report delivered to leadership
- A plan for the next quarter with the next 3 campaign hypotheses selected

If you have all of these, you are operating at HMM2 (TH0.12). You have a structured, documented, measurable hunting program that produces permanent detection improvement. Scale from here.

Try it yourself

Exercise: Build your 90-day calendar

Open your calendar (or a planning document) and block the following:

Week 1: 4 hours — readiness assessment, data source audit, baseline metrics.

Week 2: 4 hours — HMM assessment, metrics setup, cadence definition, TH1 methodology review.

Weeks 3–4: 6 hours — TH3 ATT&CK coverage analysis (when available), backlog creation.

Weeks 5–8: 6 hours per campaign × 3 = 18 hours — three hunt campaigns with full documentation and rule conversion.

Weeks 9–12: 4 hours — rule validation, re-measurement, quarterly report compilation.

Total: ~36 hours over 12 weeks. That is 3 hours per week on average. Achievable alongside a full-time SOC role with protected time.

⚠ Compliance Myth: "A hunting program takes 6–12 months to show results"

The myth: Hunting programs are long-term investments that do not produce measurable results for 6–12 months. Leadership should expect a long ramp-up before seeing value.

The reality: The first hunt campaign — which can be completed in week 5 of this roadmap — produces a documented finding (positive or negative) and a detection rule. That is measurable output from week 5. By day 90, the program has produced 3 hunt records, 3 detection rules, and a quantified coverage improvement. The results are immediate and compounding. The "6–12 month" timeline describes organizational culture change and full maturity (HMM3+), not the time to first measurable output. The first output comes in weeks, not months.

Extend this roadmap

The 90-day plan covers the first quarter. Quarters 2–4 follow the same cadence: 3 campaigns per quarter from the prioritized backlog, 3 detection rules produced per quarter, quarterly metrics and report. After four quarters, you have 12 hunt records, 12+ detection rules, and a full year of metrics data showing the coverage trend. That annual dataset is the evidence for expanding the program — more hours, more analysts, or more ambitious campaigns. The first year builds the case. The second year scales it.
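
If you keep using the same SecurityAlert-based numerator as the Week 1 baseline, a rough coverage trend can be sketched from the date each technique first produced an alert. Treat it as an approximation: a technique is covered when its rule exists, not when it first fires, and the query assumes alert retention beyond the 90-day default.

// Rough sketch: new techniques first seen alerting, per month, as a coverage-trend proxy.
// First-alert date only approximates when coverage was added; extended retention assumed.
SecurityAlert
| where ProviderName == "ASI Scheduled Alerts"
| extend Techniques = parse_json(tostring(parse_json(ExtendedProperties)["Techniques"]))
| mv-expand Technique = Techniques to typeof(string)
| where isnotempty(Technique)
| summarize FirstCovered = min(TimeGenerated) by Technique
| summarize NewTechniquesCovered = count() by Month = startofmonth(FirstCovered)
| sort by Month asc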




Interactive lab: walk the attack chain your detections missed

CHAIN-HARVEST succeeded because no detection rule caught the complete sequence. Walk the attack and identify where a hunt — not a detection rule — would have found the attacker. This is the case for hunting: finding threats that operate in your detection blind spots.

How the hands-on experience works

Every KQL example in this course shows the query AND the expected results. You read the query, study the output, and understand what the data reveals — without needing a separate lab environment.

The interactive labs embedded throughout the course provide the practice layer. Parameter sandboxes let you tune detection thresholds and see the impact in real time. Alert simulators present realistic triage queues. Investigation engines walk you through multi-step investigations with branching decisions. All of this runs in your browser — no setup required.

Expand for Deeper Context

If you want to run queries yourself (optional):

If you have access to a Microsoft Sentinel workspace or Defender XDR Advanced Hunting in your day job, every query in this course runs there directly. Adapt the NE examples to your environment — replace the fictional user names and IPs with your own. This is the fastest path to production value: learning the pattern here, deploying it in your environment the same day.

If you do not have access to a production environment, an M365 E5 developer tenant (free at developer.microsoft.com) provides a full Sentinel workspace with sample data. Setup takes 30-45 minutes. This is optional — the course is fully completable without it.

Decision point

You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?

Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.

A hunt query returns 200 results. You have 4 hours remaining in the hunt window. You can investigate 20 results thoroughly or review all 200 superficially. Which approach produces better hunt outcomes?

- Review all 200 — you might miss a critical finding in the 180 you skip.
- Investigate 20 thoroughly (correct). A superficial review of 200 results produces 200 'looked at it, seemed okay' assessments that provide no investigative value and no documentation for future reference. A thorough investigation of 20 results produces: confirmed findings (true positives requiring remediation), confirmed benign patterns (documented baselines for future comparison), and inconclusive results (flagged for monitoring). Prioritize the 20 by: highest anomaly score, highest-value assets involved, and highest-risk users involved (a ranking sketch follows these options). Document why the remaining 180 were not investigated and recommend a follow-up hunt with refined query criteria to reduce the result set.
- Investigate 20 — but only if they are from the most recent 24 hours.
- Neither — refine the query first to reduce the result set below 50.
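
A minimal sketch of that ranking, assuming the hunt results were materialized into a table with hypothetical AnomalyScore, AssetCriticality, and UserRiskScore enrichment columns:

// Sketch: reduce 200 hunt results to the 20 worth deep investigation.
// HuntResults and the three scoring columns are hypothetical; substitute the output
// of your own hunt query and whatever enrichment your environment provides.
HuntResults
| extend TriagePriority = (AnomalyScore * 0.5) + (AssetCriticality * 0.3) + (UserRiskScore * 0.2)
| top 20 by TriagePriority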

You understand the detection gap and the hunt cycle.

TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.

  • 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
  • 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
  • Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
  • Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
  • TH16 — Scaling hunts across a team — the operating model for a production hunt program