TH1.6 Converting Hunts to Detection Rules
The query is already written
This is the part that detection engineers undervalue about hunting: the hunt already did the hard work. The query has been written, tested against real data, tuned against the environment's noise level, and validated by a human analyst who examined every result. Converting that query to a detection rule is not starting from scratch — it is deploying a pre-validated detection.
Compare this to the normal detection engineering workflow: the engineer reads a threat report, writes a query based on theoretical understanding of the technique, deploys the rule, and then discovers over the following weeks that the false positive rate is unacceptable because the query matches legitimate activity patterns the engineer did not anticipate. Tuning cycles begin. The rule is disabled, modified, re-enabled, modified again. Weeks pass before the rule is stable.
// Hunt query producing results:
// (knownVPNIPs and excludedServiceAccounts are placeholder lists here;
// in production, source them from Sentinel watchlists via _GetWatchlist)
let knownVPNIPs = dynamic(["203.0.113.10", "203.0.113.11"]);        // example addresses
let excludedServiceAccounts = dynamic(["svc-backup@contoso.com"]);  // example account
SigninLogs
| where TimeGenerated > ago(1h)
// ... core hunting logic producing suspicious sign-ins ...
| project
    TimeGenerated,
    UserPrincipalName,
    IPAddress,
    Country = tostring(LocationDetails.countryOrRegion),
    AppDisplayName,
    RiskLevelDuringSignIn
// Add entity mapping for Sentinel incident creation:
| extend
    AccountName = tostring(split(UserPrincipalName, "@")[0]),
    AccountUPNSuffix = tostring(split(UserPrincipalName, "@")[1])
// Map: Account entity → UserPrincipalName
// Map: IP entity → IPAddress
// These mappings enable automatic entity linking in incidents
// Exclusions from hunt [ID] analysis:
| where IPAddress !in (knownVPNIPs)
// Exclude corporate VPN egress IPs — hunt analysis confirmed
// these produce false positives for travel users
| where UserPrincipalName !in (excludedServiceAccounts)
// Exclude service accounts with known multi-IP auth patterns
Try it yourself
Exercise: Convert a hunt query to a detection rule
Take the step 2 query from the TH1.3 exercise (new IP baseline comparison). Adapt it for a Sentinel analytics rule:
1. Time window: Change from ago(30d) to ago(90m) for the detection window; the extra 30 minutes of overlap against an hourly schedule keeps ingestion-delayed events from falling between runs. Keep the baseline at ago(37d) to ago(7d), or move the baseline to a watchlist if the computation is too expensive. A combined sketch of steps 1 through 4 follows this list.
2. Threshold: Based on your hunt results, how many new-IP sign-ins per user in a 1-hour window would warrant an alert? Summarize per user, then add a `where` clause that keeps only users exceeding that threshold.
3. Exclusions: From your hunt analysis, which IPs or users produced false positives? Add explicit exclusions.
4. Entity mapping: Add AccountName and AccountUPNSuffix extensions for Sentinel entity mapping.
5. Deploy: Create the analytics rule in your Sentinel workspace with incident creation disabled, so it runs alert-only for the first 14 days. Monitor the false positive rate before enabling incident creation.
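Here is a minimal sketch of what steps 1 through 4 produce when combined. The threshold of 3, the example exclusion value, and the watchlist name are illustrative placeholders; substitute the values your own hunt analysis justified:
// Sketch only: threshold, exclusion value, and watchlist name are placeholders
let excludedServiceAccounts = dynamic(["svc-backup@contoso.com"]);
// Step 1 baseline: user-to-IP pairs seen between 37 and 7 days ago
// (alternative if this is too expensive: let baselineIPs = _GetWatchlist('User-IP-Baseline')
//  | project UserPrincipalName, IPAddress; assuming the watchlist carries those columns)
let baselineIPs = SigninLogs
    | where TimeGenerated between (ago(37d) .. ago(7d))
    | summarize by UserPrincipalName, IPAddress;
SigninLogs
| where TimeGenerated > ago(90m)                         // step 1: detection window
| where UserPrincipalName !in (excludedServiceAccounts)  // step 3: hunt-derived exclusions
| join kind=leftanti baselineIPs on UserPrincipalName, IPAddress  // keep never-seen pairs
| summarize NewIPSignins = count(), NewIPs = make_set(IPAddress) by UserPrincipalName
| where NewIPSignins >= 3                                // step 2: threshold from hunt results
| extend                                                 // step 4: entity mapping
    AccountName = tostring(split(UserPrincipalName, "@")[0]),
    AccountUPNSuffix = tostring(split(UserPrincipalName, "@")[1])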
This rule is the tangible output of your hunting exercise — a permanent detection that did not exist before the hunt.
The myth: The query worked in the hunt, so it will work as a scheduled rule. Just copy-paste it into an analytics rule and enable it.
The reality: Hunt queries and detection rules have different execution contexts. A hunt query runs interactively with a 30-day window and human review of results. A detection rule runs unattended every hour and creates incidents automatically. Without adapting the time window, thresholds, exclusions, and entity mapping, the rule will either fire on every legitimate event (creating hundreds of false-positive incidents) or time out on execution (producing no alerts at all). The conversion step is mandatory. It is also fast — 30–60 minutes — because the hunt already did the analytical work that normally takes days of tuning.
Extend this workflow
If your organization uses detection-as-code — storing analytics rules in a Git repository and deploying through CI/CD (covered in SOC Operations Module S2) — the hunt-to-detection conversion integrates naturally. The converted rule is submitted as a pull request with the hunt ID in the commit message, linking the detection rule permanently to the hunt that produced it. The PR description includes the hunt hypothesis, the false positive analysis, and the threshold justification. This traceability means any future question about why the rule exists, why the threshold is set at that level, or why specific exclusions are configured can be answered by reading the linked hunt record. Without detection-as-code, maintain this traceability through consistent naming: prefix hunt-derived rules with HUNT- followed by the campaign and sequence number.
References Used in This Subsection
- Microsoft. "Create Custom Analytics Rules in Microsoft Sentinel." Microsoft Learn. https://learn.microsoft.com/en-us/azure/sentinel/detect-threats-custom
- Microsoft. "Map Data Fields to Entities in Microsoft Sentinel." Microsoft Learn. https://learn.microsoft.com/en-us/azure/sentinel/map-data-fields-to-entities
- Course cross-references: SOC Operations S2 (detection-as-code pipeline), TH1.4 (confidence model for severity assignment)
Your hunt found a confirmed threat. The finding should become a detection rule. Do you build the rule yourself or hand it to the detection engineering team?
Hand it to the detection engineering team with a complete handoff: the KQL query, the entity mapping, the expected FP patterns you observed during the hunt, the severity recommendation, and the suggested response action. The hunter's expertise is in hypothesis generation and data exploration. The detection engineer's expertise is in rule optimization, FP management, and production deployment. The handoff template ensures the detection engineer has everything they need to build a production-quality rule without re-investigating the finding.
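If no formal handoff template exists in your organization yet, one lightweight option is a comment header on the query file itself. The field layout below is illustrative rather than a prescribed format; it simply packages the handoff items listed above, using the HUNT- naming convention from the detection-as-code discussion:
// HUNT-<campaign>-<nn>: <hunt title>
// Hypothesis: <one-line hunt hypothesis>
// Expected FPs: <patterns observed during the hunt, e.g. corporate VPN egress>
// Severity: <recommendation, justified via the TH1.4 confidence model>
// Suggested response: <first action for the responding analyst>
// Entity mapping: Account ← UserPrincipalName; IP ← IPAddress
// <validated KQL query follows>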
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program