TH1.13 The Hunt-to-Detection Pipeline: Worked End-to-End
The worked example: OAuth consent phishing
This example follows the Hunt Cycle from start to finish. It is drawn from the same technique domain as TH6 (OAuth Application and Consent Abuse) but condensed to demonstrate the methodology, not the full campaign depth.
Step 1: Hypothesize
Source: Microsoft Security Blog — "Midnight Blizzard conducts targeted social engineering over Microsoft Teams" (August 2023). The report describes consent phishing via Teams messages, where the attacker persuades a user to consent to an OAuth application with high-privilege delegated permissions.
// Q1: Orientation — consent event volume
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| summarize TotalConsents = count(),
UniqueApps = dcount(tostring(TargetResources[0].displayName)),
UniqueUsers = dcount(tostring(InitiatedBy.user.userPrincipalName))
// Result: 347 consents, 42 unique apps, 189 unique users
// Assessment: expected volume for a mid-sized org

// Q2: Indicator — high-privilege consent events
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
"Mail.ReadWrite", "Files.ReadWrite.All",
"Mail.Send", "Directory.ReadWrite.All")
| project TimeGenerated, ConsentedBy, AppName, Permissions
// Result: 12 consent events with high-privilege permissions

// Q3: Refinement — exclude IT department
// 12 results from Q2 → filter out IT staff
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
"Mail.ReadWrite", "Files.ReadWrite.All",
"Mail.Send", "Directory.ReadWrite.All")
| where ConsentedBy !in ("admin@northgateeng.com",
    "it-service@northgateeng.com")
// Exclude known admin accounts
// Result: 4 consent events from non-IT users

// Q5: Pivot — post-consent behavior for suspect app
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(90d)
| where ServicePrincipalName == "DocuHelper Pro"
| summarize
SignInCount = count(),
UniqueIPs = dcount(IPAddress),
IPs = make_set(IPAddress, 5),
FirstSeen = min(TimeGenerated),
LastSeen = max(TimeGenerated)
// Result: 847 sign-ins from 3 IPs since consent date
// IPs resolve to: US residential proxy, Netherlands VPS, Singapore VPS
// Sign-in pattern: continuous, 24/7, not human-driven
// Assessment: This is not a legitimate productivity tool

Try it yourself
Exercise: Run the OAuth consent hunt in your environment
Follow the five queries in this worked example against your own Sentinel workspace. Replace the fictional entity names with your environment's data.
You may find: all consented applications are legitimate (negative finding — document it, deploy the detection rule). Or you may find an unrecognized application with high-privilege permissions — investigate it using the five enrichment dimensions.
Either outcome is a successful hunt. Both produce a detection rule. This is TH6 in preview.
The pipeline's permanent value
The worked example demonstrates the complete pipeline, but the permanent value is not the specific finding — it is the reusable query pattern. The hunt query, once proven effective, becomes a scheduled analytics rule. The investigation queries become playbook steps. The containment actions become automation candidates. One successful hunt-to-detection pipeline iteration produces assets that operate automatically for years. This is the compounding return on hunting investment: each hunt that discovers a new threat pattern creates a permanent detection capability that requires no further human hunting effort for that specific pattern.
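As a sketch of that conversion, the refined Q3 hunt query can be rescoped to a scheduled rule's lookback window. The 1-hour lookback and the carried-over exclusion list below are illustrative assumptions to tune for your environment:

```kusto
// Illustrative scheduled analytics rule derived from the Q3 hunt query.
// Assumes a 1-hour run frequency with a matching 1-hour lookback.
AuditLogs
| where TimeGenerated > ago(1h)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
    "Mail.ReadWrite", "Files.ReadWrite.All",
    "Mail.Send", "Directory.ReadWrite.All")
// Exclusions proven during the hunt become the rule's tuning baseline
| where ConsentedBy !in ("admin@northgateeng.com",
    "it-service@northgateeng.com")
| project TimeGenerated, ConsentedBy, AppName, Permissions
```

The only substantive change from hunt to rule is the time window: the hunt's 90-day retrospective sweep becomes a rolling window matched to the rule's run frequency, so each execution inspects only new consent events.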
The myth: If an attacker was present for 43 days, the security program has failed. This finding reflects poorly on the team.
The reality: The 43-day dwell time reflects a detection gap — the technique had no detection rule. The hunting program just closed that gap. Before the hunt, the organization had zero visibility into this technique and the attacker would have persisted indefinitely. After the hunt, the compromise is contained AND a detection rule ensures the technique is automatically detected in the future. The hunt is proof that the security program is working: it found something that no rule could. The dwell time is the measure of what the detection gap cost. The hunt is what closed it.
Extend this example
This worked example covers a single-entity finding. TH6 (the full OAuth campaign module) extends this to a comprehensive tenant-wide audit of all OAuth applications — not just new consents, but dormant applications with high permissions that were consented months or years ago and have never been reviewed. The full campaign also examines first-party Microsoft applications with excessive permissions and service principals with credentials that have not been rotated. The worked example here is the entry point. The campaign module is the complete investigation.
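As a first pass at the dormant-application angle, the hunt's own data sources can be combined: consent events from AuditLogs (within whatever retention your workspace keeps — 90 days here, matching the worked example) joined against recent service principal sign-ins. The 30-day dormancy threshold is an illustrative assumption:

```kusto
// Illustrative sketch: high-privilege apps consented long ago with no
// recent service principal sign-ins. Thresholds are assumptions to tune.
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any ("Mail.ReadWrite", "Files.ReadWrite.All",
    "Mail.Send", "Directory.ReadWrite.All")
| summarize ConsentDate = min(TimeGenerated) by AppName
// leftanti keeps only apps with NO sign-ins in the last 30 days
| join kind=leftanti (
    AADServicePrincipalSignInLogs
    | where TimeGenerated > ago(30d)
    | distinct AppName = ServicePrincipalName
  ) on AppName
| where ConsentDate < ago(30d)
```

A dormant app with high permissions is not automatically malicious — it may simply be abandoned — but an abandoned high-privilege grant is an attack surface either way, which is why the full campaign module treats it as a review target.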
References Used in This Subsection
- Microsoft Threat Intelligence. "Midnight Blizzard conducts targeted social engineering over Microsoft Teams." Microsoft Security Blog, August 2023.
- MITRE ATT&CK Techniques referenced: T1098.003 (Account Manipulation: Additional Cloud Roles)
NE operational context
This detection operates within NE's 18 GB/day Sentinel ingestion environment across 20 connected data sources. The rule's alert volume, TP rate, and SOC triage burden are calibrated for NE's 3-person SOC team handling 7-16 incidents per day. The detection engineer (Rachel) reviews this rule's health during the monthly tuning review (DE9.9) and adjusts thresholds, exclusions, and entity mapping as the environment evolves.
The rule's position in the overall detection library means it correlates with rules from adjacent kill chain phases — an alert from this rule gains significance when combined with alerts from earlier or later phases targeting the same entity.
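One hedged sketch of that cross-phase correlation, using the Sentinel SecurityAlert table: group recent alerts by the account entity they share and flag accounts hit by more than one ATT&CK tactic. The entity extraction and 24-hour window below are simplifying assumptions:

```kusto
// Illustrative correlation sketch: accounts appearing in alerts from
// multiple kill chain tactics within a 24-hour window.
SecurityAlert
| where TimeGenerated > ago(24h)
// Entities is a JSON array; expand it and keep account entities
| mv-expand Entity = todynamic(Entities)
| where tostring(Entity.Type) == "account"
| extend Account = tostring(Entity.Name)
// Tactics is a comma-separated string per alert; expand to count them
| mv-expand Tactic = split(Tactics, ",")
| summarize Alerts = make_set(AlertName, 10),
    TacticCount = dcount(tostring(Tactic)) by Account
| where TacticCount > 1  // same entity, multiple kill chain phases
```

A single consent-abuse alert on an account is a triage item; the same account also tripping an initial-access or exfiltration rule in the same window is an incident.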
Your hunt found a confirmed threat. The finding should become a detection rule. Do you build the rule yourself or hand it to the detection engineering team?
Hand it to the detection engineering team with a complete handoff: the KQL query, the entity mapping, the expected FP patterns you observed during the hunt, the severity recommendation, and the suggested response action. The hunter's expertise is in hypothesis generation and data exploration. The detection engineer's expertise is in rule optimization, FP management, and production deployment. The handoff template ensures the detection engineer has everything they need to build a production-quality rule without re-investigating the finding.
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program