TH1.13 The Hunt-to-Detection Pipeline: Worked End-to-End

3-4 hours · Module 1 · Free
Operational Objective
The previous subsections taught each step of the Hunt Cycle individually. This subsection puts them together — a complete worked example from threat intelligence input through hypothesis, scope, collection, analysis, conclusion, and detection rule deployment. One continuous narrative that demonstrates how the steps connect in practice.
Deliverable: A complete end-to-end example of the Hunt Cycle that you can reference as a model when executing your own campaigns.
⏱ Estimated completion: 30 minutes

This example follows the Hunt Cycle from start to finish. It is drawn from the same technique domain as TH6 (OAuth Application and Consent Abuse) but condensed to demonstrate the methodology, not the full campaign depth.

Step 1: Hypothesize

Source: Microsoft Security Blog — "Midnight Blizzard conducts targeted social engineering over Microsoft Teams" (August 2023). The report describes consent phishing via Teams messages, where the attacker persuades a user to consent to an OAuth application with high-privilege delegated permissions.

// Q1: Orientation — consent event volume
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| summarize TotalConsents = count(),
    UniqueApps = dcount(tostring(TargetResources[0].displayName)),
    UniqueUsers = dcount(tostring(InitiatedBy.user.userPrincipalName))
// Result: 347 consents, 42 unique apps, 189 unique users
// Assessment: expected volume for a mid-sized org
// Q2: Indicator — high-privilege consent events
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
    "Mail.ReadWrite", "Files.ReadWrite.All",
    "Mail.Send", "Directory.ReadWrite.All")
| project TimeGenerated, ConsentedBy, AppName, Permissions
// Result: 12 consent events with high-privilege permissions
// Q3: Refinement — exclude IT department
// 12 results from Q2 → filter out IT staff
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
    "Mail.ReadWrite", "Files.ReadWrite.All",
    "Mail.Send", "Directory.ReadWrite.All")
| where ConsentedBy !in ("admin@northgateeng.com",
    "it-service@northgateeng.com")
// Exclude known admin accounts
// Result: 4 consent events from non-IT users
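// Q4: Enrichment — which applications are behind the 4 remaining events?
// (Illustrative sketch — the original text describes Q4 but does not show it;
// this reconstruction reuses the Q2/Q3 filters and groups by application name.)
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
    "Mail.ReadWrite", "Files.ReadWrite.All",
    "Mail.Send", "Directory.ReadWrite.All")
| where ConsentedBy !in ("admin@northgateeng.com",
    "it-service@northgateeng.com")
| summarize Consents = count(), Users = make_set(ConsentedBy) by AppName
// Each returned application is then checked against the approved list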
// Q5: Pivot — post-consent behavior for suspect app
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(90d)
| where ServicePrincipalName == "DocuHelper Pro"
| summarize
    SignInCount = count(),
    UniqueIPs = dcount(IPAddress),
    IPs = make_set(IPAddress, 5),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated)
// Result: 847 sign-ins from 3 IPs since consent date
// IPs resolve to: US residential proxy, Netherlands VPS, Singapore VPS
// Sign-in pattern: continuous, 24/7, not human-driven
// Assessment: This is not a legitimate productivity tool

Hypothesis: "If an attacker used consent phishing to gain persistent access, AuditLogs will contain 'Consent to application' operations granting Mail.ReadWrite or Files.ReadWrite.All delegated permissions from non-admin users in the last 90 days."

Quality check: Specific (names technique, data source, indicator). Testable (AuditLogs is ingested). Grounded (TI report from Microsoft). Actionable (if confirmed, revoke consent and investigate; if refuted, deploy detection rule and document).

Step 2: Scope

Data sources: AuditLogs (consent events), AADServicePrincipalSignInLogs (post-consent app behavior).

Time window: 90 days (consent phishing may have occurred months ago; the application persists until revoked).

Population: Full tenant (consent phishing targets any user, not just privileged accounts).

Success criteria — positive: User-consented application with Mail.ReadWrite or Files.ReadWrite.All from a non-IT user, where the application is not a recognized productivity tool.

Success criteria — negative: Full tenant examined across 90 days, no unrecognized high-privilege user-consented applications found.

Step 3: Collect

Query 1 — Orientation: How many consent events in 90 days?

Query 2 — Indicator: Filter to high-privilege delegated permissions.

Query 3 — Refinement: Exclude known IT/admin users.

Query 4 — Enrichment: What are these 4 applications?

The analyst examines each:

1. Grammarly — recognized productivity tool. Legitimate.
2. Adobe Acrobat — recognized. Legitimate.
3. Zoom — recognized. Legitimate.
4. "DocuHelper Pro" — not recognized. Consented by l.chen@northgateeng.com on 2026-02-15. Permissions: Mail.ReadWrite, Files.ReadWrite.All.

Query 5 — Pivot: What did "DocuHelper Pro" do after consent?

Step 4: Analyze

Enrichment across five dimensions for "DocuHelper Pro":

User context: l.chen is a marketing coordinator. No legitimate reason to consent to a mail/file access application not on the approved list.

Temporal: Consent occurred 2026-02-15 at 10:23 UTC. Check EmailEvents — a Teams message from an external sender was received at 09:45 UTC on the same day containing a consent link. The 38-minute gap between message delivery and consent is consistent with social engineering.
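The temporal check can be sketched in KQL. This sketch carries assumptions: it presumes the message is surfaced through the EmailEvents table as the narrative describes, and the column names (RecipientEmailAddress, SenderFromDomain, DeliveryAction) come from the Microsoft 365 Defender connector schema rather than from this course's queries.

EmailEvents
| where Timestamp between (datetime(2026-02-15 08:00) .. datetime(2026-02-15 11:00))
| where RecipientEmailAddress == "l.chen@northgateeng.com"
| where SenderFromDomain != "northgateeng.com"  // external senders only
| project Timestamp, SenderFromAddress, Subject, DeliveryAction

A message delivered at 09:45 UTC with a consent-link payload, followed by the 10:23 UTC consent event, supports the social-engineering sequence.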

Geographic: Post-consent sign-ins from 3 countries (US, Netherlands, Singapore) on residential proxy and VPS infrastructure. Not l.chen's sign-in pattern.

Behavioral: 847 sign-ins over 43 days. Continuous, automated access. Mail.ReadWrite permission means the application has been reading all of l.chen's email for 43 days.

Correlated: 4 dimensions indicate compromise. Confidence: High.

Step 5: Conclude

Outcome: Confirmed. "DocuHelper Pro" is a malicious OAuth application that has been reading l.chen's email for 43 days via consented Mail.ReadWrite permissions, following a consent phishing attack delivered via Teams.

Escalation package: Finding (l.chen compromised via OAuth consent phishing, 43-day dwell time), evidence (consent event, Teams message, 847 automated sign-ins from 3 proxy IPs), recommended containment (revoke application consent, revoke l.chen's sessions, reset password, review all email sent and received during the 43-day window, check for data exfiltration or BEC activity from the account).

Dwell time compression: The application was active for 43 days. No detection rule existed for this technique. Without hunting, the compromise would have continued indefinitely until the application was discovered through a separate investigation or the attacker achieved their objective.

Step 6: Convert

The step 2 query (high-privilege consent from non-admin users), with exclusions for the three known legitimate applications (Grammarly, Adobe, Zoom), becomes a Sentinel analytics rule:

Rule name: HUNT-TH6-001: High-privilege OAuth consent by non-admin user
Frequency: Every 4 hours, 6-hour lookback
Exclusions: Known apps on allowlist (Grammarly, Adobe Acrobat, Zoom, Microsoft To-Do)
Entity mapping: Account (ConsentedBy), Application (AppName)
Severity: Medium (single indicator — consent event; escalate to High if post-consent SPN sign-in anomalies detected)
Deployed: Report-only for 14 days, then promoted to production.
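A sketch of what the scheduled rule's query could look like, assuming the parameters above. The production KQL is not shown in this subsection, so treat the allowlist handling and exclusion lists as illustrative:

AuditLogs
| where TimeGenerated > ago(6h)  // matches the 6-hour lookback
| where OperationName == "Consent to application"
| where Result == "success"
| extend AppName = tostring(TargetResources[0].displayName)
| extend ConsentedBy = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(TargetResources[0].modifiedProperties)
| where Permissions has_any (
    "Mail.ReadWrite", "Files.ReadWrite.All",
    "Mail.Send", "Directory.ReadWrite.All")
| where AppName !in ("Grammarly", "Adobe Acrobat", "Zoom", "Microsoft To-Do")
| where ConsentedBy !in ("admin@northgateeng.com",
    "it-service@northgateeng.com")
| project TimeGenerated, ConsentedBy, AppName, Permissions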

WORKED EXAMPLE — COMPLETE HUNT CYCLE IN ONE FLOW

1. HYPOTHESIZE: TI → OAuth consent
2. SCOPE: AuditLogs, 90d, full tenant
3. COLLECT: 5 queries: 347 → 12 → 4 → 1
4. ANALYZE: 4/5 dims = HIGH confidence
5. CONCLUDE: CONFIRMED → IR
6. CONVERT: HUNT-TH6-001 deployed

OUTPUT: 1 compromise found (43-day dwell time compressed) + 1 permanent detection rule. Total analyst time: ~6 hours. Detection gap for T1098.003 closed permanently.

Figure TH1.13 — Complete Hunt Cycle worked example. Six steps, five queries, one compromise found, one detection rule deployed. The technique moves from unmonitored to automated detection in a single campaign.

Try it yourself

Exercise: Run the OAuth consent hunt in your environment

Follow the five queries in this worked example against your own Sentinel workspace. Replace the fictional entity names with your environment's data.

You may find: all consented applications are legitimate (negative finding — document it, deploy the detection rule). Or you may find an unrecognized application with high-privilege permissions — investigate it using the five enrichment dimensions.

Either outcome is a successful hunt. Both produce a detection rule. This is TH6 in preview.

The pipeline's permanent value

The worked example demonstrates the complete pipeline, but the permanent value is not the specific finding — it is the reusable query pattern. The hunt query, once proven effective, becomes a scheduled analytics rule. The investigation queries become playbook steps. The containment actions become automation candidates. One successful hunt-to-detection pipeline iteration produces assets that operate automatically for years. This is the compounding return on hunting investment: each hunt that discovers a new threat pattern creates a permanent detection capability that requires no further human hunting effort for that specific pattern.

⚠ Compliance Myth: "A 43-day dwell time proves our security program is failing"

The myth: If an attacker was present for 43 days, the security program has failed. This finding reflects poorly on the team.

The reality: The 43-day dwell time reflects a detection gap — the technique had no detection rule. The hunting program just closed that gap. Before the hunt, the organization had zero visibility into this technique and the attacker would have persisted indefinitely. After the hunt, the compromise is contained AND a detection rule ensures the technique is automatically detected in the future. The hunt is the proof the security program is working — it found something that no rule could. The dwell time is the measure of what the detection gap cost. The hunt is what closed it.

Extend this example

This worked example covers a single-entity finding. TH6 (the full OAuth campaign module) extends this to a comprehensive tenant-wide audit of all OAuth applications — not just new consents, but dormant applications with high permissions that were consented months or years ago and have never been reviewed. The full campaign also examines first-party Microsoft applications with excessive permissions and service principals with credentials that have not been rotated. The worked example here is the entry point. The campaign module is the complete investigation.


References Used in This Subsection

  • Microsoft Threat Intelligence. "Midnight Blizzard conducts targeted social engineering over Microsoft Teams." Microsoft Security Blog, August 2023.
  • MITRE ATT&CK Techniques referenced: T1098.003 (Account Manipulation: Additional Cloud Roles)

NE operational context

This detection operates within NE's 18 GB/day Sentinel ingestion environment across 20 connected data sources. The rule's alert volume, TP rate, and SOC triage burden are calibrated for NE's 3-person SOC team handling 7-16 incidents per day. The detection engineer (Rachel) reviews this rule's health during the monthly tuning review (DE9.9) and adjusts thresholds, exclusions, and entity mapping as the environment evolves.

The rule's position in the overall detection library means it correlates with rules from adjacent kill chain phases — an alert from this rule gains significance when combined with alerts from earlier or later phases targeting the same entity.
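Cross-phase correlation can be sketched as a simple entity pivot over recent alerts. This is a hedged illustration — SecurityAlert is the standard Sentinel alert table, but the Entities JSON schema varies by alert provider, so the parsing below is an assumption:

SecurityAlert
| where TimeGenerated > ago(7d)
| mv-expand Entity = todynamic(Entities)
| where tostring(Entity.Type) =~ "account"
| extend Account = tostring(Entity.Name)
| summarize Rules = make_set(AlertName), RuleCount = dcount(AlertName) by Account
| where RuleCount > 1  // same account hit by more than one rule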

Decision point

Your hunt found a confirmed threat. The finding should become a detection rule. Do you build the rule yourself or hand it to the detection engineering team?

Hand it to the detection engineering team with a complete handoff: the KQL query, the entity mapping, the expected FP patterns you observed during the hunt, the severity recommendation, and the suggested response action. The hunter's expertise is in hypothesis generation and data exploration. The detection engineer's expertise is in rule optimization, FP management, and production deployment. The handoff template ensures the detection engineer has everything they need to build a production-quality rule without re-investigating the finding.

A hunt query returns 200 results. You have 4 hours remaining in the hunt window. You can investigate 20 results thoroughly or review all 200 superficially. Which approach produces better hunt outcomes?

  • Review all 200 — you might miss a critical finding in the 180 you skip.
  • Investigate 20 thoroughly. (Correct.) A superficial review of 200 results produces 200 "looked at it, seemed okay" assessments that provide no investigative value and no documentation for future reference. A thorough investigation of 20 results produces: confirmed findings (true positives requiring remediation), confirmed benign patterns (documented baselines for future comparison), and inconclusive results (flagged for monitoring). Prioritize the 20 by highest anomaly score, highest-value assets involved, and highest-risk users involved. Document why the remaining 180 were not investigated and recommend a follow-up hunt with refined query criteria to reduce the result set.
  • Investigate 20 — but only if they are from the most recent 24 hours.
  • Neither — refine the query first to reduce the result set below 50.

You understand the detection gap and the hunt cycle.

TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.

  • 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
  • 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
  • Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
  • Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
  • TH16 — Scaling hunts across a team — the operating model for a production hunt program