Your Sentinel analytics rules are running. Defender XDR creates incidents automatically. The dashboard shows green across the board. And somewhere in your tenant, an attacker is reading email from an account they compromised nine days ago using a technique that doesn't match a single rule in your library.
Detection rules catch what you've already anticipated. Threat hunting finds what you haven't. The median dwell time for identity-based attacks in cloud environments is still measured in days, not minutes — and the reason is that most SOCs treat detection rules as complete coverage rather than a starting point.
Hunt 1: New IP addresses bypassing the 30-day baseline
Most sign-in anomaly detections look for impossible travel or known-bad IPs. Attackers who use residential proxies or VPN endpoints in the same country bypass both. This hunt compares each user's sign-in IPs from the last 7 days against their 30-day baseline and surfaces the accounts using IPs they've never authenticated from before.
let baseline = SigninLogs
| where TimeGenerated between (ago(37d) .. ago(7d))
| where ResultType == "0"
| summarize KnownIPs = make_set(IPAddress, 50)
by UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0"
| join kind=inner baseline on UserPrincipalName
| where not(set_has_element(KnownIPs, IPAddress))
| summarize
NewIPCount = dcount(IPAddress),
NewIPs = make_set(IPAddress, 5),
FirstSeen = min(TimeGenerated)
by UserPrincipalName
| where NewIPCount >= 1 // raise to 2 or 3 to cut routine noise
| sort by NewIPCount desc

What you're looking for: Users with new IPs, especially from hosting providers or VPN ranges. One new IP from a home broadband change is normal. Three new IPs from different hosting ASNs in the same week is a finding. Cross-reference hits against AuditLogs for MFA method registrations in the same window — an attacker who steals a session token and immediately registers their own authenticator app is a confirmed compromise, not a hunt lead.
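The MFA-registration cross-check can be sketched as a standalone query. This is an illustrative sketch: the operation name below matches the Entra ID audit log activity for security info registration, but verify it against your own tenant's AuditLogs before relying on it.

```kusto
// Sketch: users who registered new authentication methods in the
// same 7-day window as Hunt 1. Compare the TargetUser column
// against Hunt 1's hits by UPN.
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "User registered security info"
| extend TargetUser = tostring(TargetResources[0].userPrincipalName)
| project TimeGenerated, TargetUser, OperationName, ResultDescription
```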
If this returns too many results: Your environment has a lot of travel or remote workers. Tighten the filter by adding | where ConditionalAccessStatus != "success" to exclude sign-ins that passed your Conditional Access policies, or filter to privileged accounts only using a watchlist of admin UPNs.
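Both tightening options can be expressed directly in the query. The watchlist name `AdminUPNs` and its `UserPrincipalName` column are assumptions — substitute whatever your environment uses.

```kusto
// Sketch: restrict Hunt 1's recent-sign-in stage to privileged accounts,
// assuming a Sentinel watchlist named "AdminUPNs" with a
// "UserPrincipalName" column. Use either filter, or both.
let admins = _GetWatchlist('AdminUPNs')
    | project UserPrincipalName = tostring(UserPrincipalName);
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0"
| where UserPrincipalName in (admins)                 // privileged accounts only
| where ConditionalAccessStatus != "success"          // drop sign-ins that satisfied CA
```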
Hunt 2: OAuth consent grants to applications you didn't deploy
Consent phishing bypasses MFA entirely. The user clicks a link, approves an OAuth application, and the application gets persistent access to their mailbox, files, or calendar — no password, no session token, and no entry in the user's sign-in logs after the initial grant. The application authenticates independently using its own credentials, which surface only in the service principal sign-in logs.
AuditLogs
| where TimeGenerated > ago(14d)
| where OperationName == "Consent to application"
| extend AppName = tostring(
TargetResources[0].displayName)
| extend ConsentUser = tostring(
InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(
TargetResources[0].modifiedProperties)
| project TimeGenerated, ConsentUser, AppName,
Permissions, CorrelationId
| sort by TimeGenerated desc

What you're looking for: Applications with Mail.Read, Mail.ReadWrite, Files.Read.All, or User.Read.All permissions that were consented to by end users (not admins through a deployment process). Check the AppName — legitimate business applications have recognizable names. An app called "OneDrive Sync Service" or "Microsoft Updates" that was consented to by a single user last Tuesday is almost certainly malicious. Cross-reference with AADServicePrincipalSignInLogs to see if the application is actively authenticating.
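That cross-reference is a quick pivot on the AppId from the consent event. The AppId below is a placeholder — take it from the CorrelationId-matched consent record in the Hunt 2 output.

```kusto
// Sketch: is the suspicious application actively authenticating?
// Replace the placeholder with the AppId from the consent event.
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(14d)
| where AppId == "<app-id-from-consent-event>"
| summarize SignIns = count(), IPs = make_set(IPAddress, 10)
    by ServicePrincipalName, AppId
```

Zero rows means the grant may not have been used yet — revoke it anyway. Active sign-ins from hosting-provider IPs confirm the compromise.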
Hunt 3: Mailbox rules that hide attacker activity
After compromising a mailbox, the first thing a BEC operator does is create inbox rules that forward email to an external address and suppress notifications. The user sees nothing — the attacker gets a copy of every inbound email. This is the persistence mechanism for financial fraud.
OfficeActivity
| where TimeGenerated > ago(14d)
| where Operation in ("New-InboxRule", "Set-InboxRule",
"Enable-InboxRule", "Set-Mailbox")
| where Parameters has_any ("ForwardingSmtpAddress",
"ForwardTo", "RedirectTo", "DeleteMessage",
"MoveToFolder")
| extend RuleUser = UserId
| project TimeGenerated, RuleUser, Operation,
Parameters, ClientIP
| sort by TimeGenerated desc

What you're looking for: Any rule that forwards to an external domain, moves messages to Deleted Items or RSS Feeds (common hiding folders), or deletes messages matching keywords like "invoice," "payment," "wire," or "transfer." The combination of a forwarding rule plus a delete-or-move rule created within minutes of each other by the same user is the classic BEC setup. Check SigninLogs for that user around the same timestamp — you'll often find the compromise sign-in.
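The SigninLogs pivot for a Hunt 3 hit looks like this. The timestamp and UPN are placeholders to fill from the inbox-rule event; the six-hour lookback is an arbitrary starting window.

```kusto
// Sketch: sign-ins for the rule-creating account around the rule's
// creation time. Substitute the UPN and timestamp from the Hunt 3 hit.
let ruleTime = datetime(2024-01-01T00:00:00Z);        // placeholder
SigninLogs
| where TimeGenerated between ((ruleTime - 6h) .. (ruleTime + 1h))
| where UserPrincipalName =~ "user@example.com"       // placeholder
| project TimeGenerated, IPAddress, AppDisplayName, ResultType,
    ConditionalAccessStatus
```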
Hunt 4: Service principals authenticating from unexpected infrastructure
Service principals and managed identities are the M365 attack surface that most SOCs don't monitor at all. An attacker who creates or hijacks a service principal has persistent access that doesn't appear in normal sign-in logs, doesn't trigger user-based detections, and survives credential resets.
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0"
| summarize
SignInCount = count(),
UniqueIPs = dcount(IPAddress),
IPs = make_set(IPAddress, 10)
by ServicePrincipalName, AppId
| where UniqueIPs > 2
| sort by UniqueIPs desc

What you're looking for: Service principals authenticating from multiple IP addresses, especially if any of those IPs are hosting providers rather than your corporate ranges. A legitimate service principal authenticates from a consistent set of Azure IPs or your corporate egress. One authenticating from a DigitalOcean droplet in Amsterdam is a finding. Also look for service principals with generic names that don't match any application your team deployed — attackers name malicious apps to blend in.
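If you maintain an inventory of deployed applications, the "does my team recognize this app" check can be automated. The watchlist name `ApprovedApps` and its `AppId` column are assumptions — adapt them to however you track sanctioned applications.

```kusto
// Sketch: service principals signing in that are NOT on an
// approved-apps watchlist, assuming a Sentinel watchlist named
// "ApprovedApps" with an "AppId" column.
let approved = _GetWatchlist('ApprovedApps')
    | project AppId = tostring(AppId);
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0"
| where AppId !in (approved)
| summarize SignIns = count(), IPs = make_set(IPAddress, 10)
    by ServicePrincipalName, AppId
```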
Hunt 5: Data staging before exfiltration
Exfiltration rarely happens all at once. The attacker first stages data — downloading files from SharePoint, searching mailboxes for keywords, or accessing OneDrive in bulk. The staging phase is visible in audit logs if you look for volume anomalies rather than specific file names.
OfficeActivity
| where TimeGenerated > ago(7d)
| where Operation in ("FileDownloaded",
"FileSyncDownloadedFull", "FileAccessed")
| summarize
DownloadCount = count(),
UniqueFiles = dcount(OfficeObjectId),
OneDriveEvents = countif(OfficeWorkload == "OneDrive")
by UserId, bin(TimeGenerated, 1h)
| where DownloadCount > 50
| sort by DownloadCount desc

What you're looking for: Users downloading 50+ files in a single hour, especially outside business hours or from an IP that doesn't match their normal location. Cross-reference against the new-IP hunt (Hunt 1) — a compromised account downloading files from a hosting provider IP is an active exfiltration. Check what files were accessed: if the downloads cluster around finance, HR, or executive folders, the attacker is targeting high-value data.
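The Hunt 1 cross-reference can be folded into a single query. This sketch assumes OfficeActivity's UserId matches SigninLogs' UserPrincipalName, which holds for UPN-formatted identities but is worth verifying in your tenant.

```kusto
// Sketch: join bulk-download hours against the user's sign-in IPs in
// the same window to see where the downloads came from.
let bulkDownloads = OfficeActivity
    | where TimeGenerated > ago(7d)
    | where Operation in ("FileDownloaded", "FileSyncDownloadedFull")
    | summarize DownloadCount = count() by UserId, bin(TimeGenerated, 1h)
    | where DownloadCount > 50;
bulkDownloads
| join kind=inner (
    SigninLogs
    | where TimeGenerated > ago(7d)
    | where ResultType == "0"
  ) on $left.UserId == $right.UserPrincipalName
| project DownloadHour = TimeGenerated, UserId, DownloadCount,
    IPAddress, Location
```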
From hunt to detection
Every hunt that produces a finding should produce a detection rule. The new-IP baseline query (Hunt 1) becomes a scheduled analytics rule that fires when any user authenticates from an IP not in their 30-day baseline. The consent grant query (Hunt 2) becomes a near-real-time rule that fires on every user-initiated consent. You hunt once, then automate the detection so you never need to hunt for that specific pattern again.
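Converting Hunt 1 into a scheduled rule mostly means shrinking the recent window to match the rule's run frequency. A sketch for an hourly schedule, using the same baseline logic:

```kusto
// Sketch: Hunt 1 adapted as a scheduled analytics rule query running
// hourly — last hour of sign-ins vs. the trailing 30-day baseline.
let baseline = SigninLogs
    | where TimeGenerated between (ago(31d) .. ago(1h))
    | where ResultType == "0"
    | summarize KnownIPs = make_set(IPAddress, 50) by UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType == "0"
| join kind=inner baseline on UserPrincipalName
| where not(set_has_element(KnownIPs, IPAddress))
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
```

Map UserPrincipalName and IPAddress as entities in the rule definition so incidents carry the account and IP for triage.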
This is the cycle that builds a mature detection program: hunt → find → detect → hunt the next gap. The free modules in the Practical Threat Hunting course walk through the complete methodology — from the business case for hunting through the six-step hunt cycle that every campaign follows. No account required to start.
What to do this week
- Run Hunt 2 (OAuth consent grants) first — it's the fastest to execute and the most likely to surface something your detection rules missed entirely.
- Run Hunt 3 (mailbox rules) second — if you've had any BEC activity in the last 14 days, this will find the persistence.
- For each hunt, document the query, the result count, and your assessment in a simple table. Negative findings are still findings — they prove you looked.
- Any hunt that returns zero results is a candidate for a scheduled analytics rule. Deploy it so you don't need to hunt for that pattern manually again.
- Run all five hunts on a monthly cadence. The queries take 10 minutes each. The coverage they provide lasts until the next run.
- If you want to go deeper, the Practical Threat Hunting course runs ten complete hunt campaigns across identity, email, endpoint, and cloud — with the full hypothesis-driven methodology and hunt documentation template.
Next week: Detection engineering in Sentinel — building your first custom analytics rule from a hunt finding.