TH1.9 Advanced Collection: Multi-Table Correlation Patterns
Single tables find events. Joins find attacks.
An anomalous sign-in in SigninLogs is an indicator. That sign-in followed by an inbox rule creation in CloudAppEvents followed by an OAuth consent in AuditLogs is an attack chain. The difference is the join.
Six multi-table correlation patterns appear repeatedly across the campaign modules. Learn these patterns here. Apply them in TH4 through TH13.
Pattern 1: Authentication + directory change correlation
// Pattern: anomalous sign-in → directory modification in same window
// Use case: AiTM → MFA registration, role assignment, CA policy change
let anomalyWindow = 24h;
let suspectSignIns = SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0" // success; ResultType is a string column
| where RiskLevelDuringSignIn in ("medium", "high")
| project SignInTime = TimeGenerated,
UserPrincipalName, IPAddress,
Country = tostring(LocationDetails.countryOrRegion);
// Join: what directory changes did these users make?
AuditLogs
| where TimeGenerated > ago(7d)
| where Result == "success"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| join kind=inner suspectSignIns
on $left.Actor == $right.UserPrincipalName
| where TimeGenerated between (
SignInTime .. (SignInTime + anomalyWindow))
// Directory changes within 24h of a risky sign-in
// Relevant operations: "User registered security info",
// "Add member to role", "Update conditional access policy",
// "Consent to application"
| project TimeGenerated, Actor, OperationName,
SignInTime, IPAddress
Pattern 2: Sign-in + cloud app activity correlation
// Pattern: sign-in from new IP → cloud app activity from that session
// Use case: post-compromise activity mapping
let suspectUsers = SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0"
// ... (your indicator filter from the hunt hypothesis)
| project UserPrincipalName, SuspectIP = IPAddress
| distinct UserPrincipalName, SuspectIP;
CloudAppEvents
| where TimeGenerated > ago(7d)
| join kind=inner suspectUsers
on $left.AccountId == $right.UserPrincipalName
// Renaming to SuspectIP avoids the IPAddress/IPAddress1 collision the
// join would otherwise create
| where IPAddress == SuspectIP
// Activity from the same IP as the anomalous sign-in
// Look for: New-InboxRule, Set-Mailbox, file downloads,
// sharing operations, consent events
| summarize Actions = make_set(ActionType, 20),
ActionCount = count()
by AccountId, IPAddress
| where ActionCount > 5
// Users with 5+ actions from the anomalous IP warrant investigation
Pattern 3: Phishing email + risky sign-in correlation
// Pattern: phishing email → compromised sign-in within N hours
// Use case: confirming AiTM attack chain
let maxPhishToSignIn = 48h;
let riskySignIns = SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType == "0"
| where RiskLevelDuringSignIn in ("medium", "high")
| project SignInTime = TimeGenerated,
UserPrincipalName, IPAddress;
EmailEvents
| where TimeGenerated > ago(7d)
| where DeliveryAction == "Delivered"
| where ThreatTypes has_any ("Phish", "Malware")
| join kind=inner riskySignIns
on $left.RecipientEmailAddress == $right.UserPrincipalName
| where SignInTime between (
TimeGenerated .. (TimeGenerated + maxPhishToSignIn))
// Phishing email delivered → risky sign-in within 48 hours
// This correlation is the strongest single indicator of AiTM
| project EmailTime = TimeGenerated, Subject,
SenderFromAddress, SignInTime, IPAddress,
UserPrincipalName
Pattern 4: Cloud compromise + endpoint execution correlation
// Pattern: cloud compromise → endpoint execution
// Use case: tracking attack pivot from cloud to endpoint
let compromisedUsers = datatable(UPN:string)
["j.morrison@northgateeng.com"]; // From hunt finding
// Cloud activity
let cloudActions = CloudAppEvents
| where TimeGenerated > ago(7d)
| where AccountId in (compromisedUsers)
| project CloudTime = TimeGenerated, ActionType,
Application, AccountId;
// Endpoint activity for the same user, merged with the cloud
// timeline so both can be reviewed in time order
DeviceProcessEvents
| where TimeGenerated > ago(7d)
| where AccountUpn in (compromisedUsers)
| project EventTime = TimeGenerated, Source = "Endpoint",
Account = AccountUpn, DeviceName,
Detail = strcat(FileName, " ", ProcessCommandLine)
| union (cloudActions
| project EventTime = CloudTime, Source = "Cloud",
Account = AccountId, DeviceName = "",
Detail = strcat(ActionType, " via ", Application))
| sort by EventTime asc
// One time-ordered timeline of cloud and endpoint activity
// for the compromised user
// Look for: file downloads in cloud followed by execution on endpoint
// Look for: browser-based credential theft followed by tool deployment
Pattern 5: Interactive baseline + non-interactive anomaly correlation
// Pattern: interactive baseline → non-interactive anomaly
// Use case: AiTM token replay (TH4 primary detection)
let baselineWindow = 30d;
let detectionWindow = 7d;
let interactiveBaseline = SigninLogs
| where TimeGenerated between (
ago(baselineWindow + detectionWindow) .. ago(detectionWindow))
| where ResultType == "0" and IsInteractive == true
| summarize KnownIPs = make_set(IPAddress, 30)
by UserPrincipalName;
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(detectionWindow)
| where ResultType == "0"
| join kind=inner interactiveBaseline on UserPrincipalName
// The in() operator does not accept a dynamic column;
// set_has_element performs the per-row membership test
| where not(set_has_element(KnownIPs, IPAddress))
// Token refresh from IP not seen in 30-day interactive baseline
// This is the signature of token replay
| summarize RefreshCount = count(),
Apps = make_set(AppDisplayName, 5)
by UserPrincipalName, IPAddress
| where RefreshCount > 3
// Multiple refreshes = sustained session from anomalous IP
Pattern 6: Consent + app sign-in + data access correlation
// Pattern: consent event → app sign-in → data access
// Use case: privilege escalation investigation (TH6)
let recentConsents = AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| where Result == "success"
// TargetResources[0].id holds the service principal object ID,
// which matches ServicePrincipalId in the sign-in logs (the AppId
// column there is the application/client ID, a different GUID)
| extend SpObjectId = tostring(TargetResources[0].id)
| extend ConsentedBy = tostring(
InitiatedBy.user.userPrincipalName)
| project ConsentTime = TimeGenerated,
SpObjectId, ConsentedBy;
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(90d)
| join kind=inner recentConsents
on $left.ServicePrincipalId == $right.SpObjectId
| where TimeGenerated > ConsentTime
// App sign-ins AFTER consent was granted
| summarize
SignInCount = count(),
UniqueIPs = dcount(IPAddress),
IPs = make_set(IPAddress, 5)
by AppId, ConsentedBy
// High sign-in count from many IPs post-consent = suspicious
// Low sign-in count from expected IP = likely legitimate app
Try it yourself
Exercise: Run Pattern 5 in your environment
Pattern 5 (interactive → non-interactive IP mismatch) is the most immediately useful. Run it against your Sentinel workspace or Advanced Hunting.
Examine the results. How many users show non-interactive token refreshes from IPs not in their 30-day interactive baseline? Most will be legitimate (VPN, mobile network switching). Look for users where the non-interactive IP is in a different country from all interactive sign-ins — those warrant enrichment across the five dimensions from TH1.4.
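To separate the VPN and mobile-network noise from genuine anomalies, the country comparison can be expressed directly in KQL. This is a hedged sketch layered on Pattern 5: the `parse_json` call and the `LocationDetails.countryOrRegion` field are assumptions, since location enrichment is not guaranteed to be populated in every tenant, particularly in the non-interactive table.

```kql
// Sketch: Pattern 5 narrowed to token refreshes from a country absent
// from the user's 30-day interactive baseline. Assumes LocationDetails
// carries countryOrRegion in both tables (not guaranteed per tenant).
let baselineWindow = 30d;
let detectionWindow = 7d;
let countryBaseline = SigninLogs
| where TimeGenerated between (
    ago(baselineWindow + detectionWindow) .. ago(detectionWindow))
| where ResultType == "0" and IsInteractive == true
| extend Country = tostring(parse_json(LocationDetails).countryOrRegion)
| summarize KnownCountries = make_set(Country, 20)
    by UserPrincipalName;
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(detectionWindow)
| where ResultType == "0"
| extend Country = tostring(parse_json(LocationDetails).countryOrRegion)
| join kind=inner countryBaseline on UserPrincipalName
| where isnotempty(Country)
    and not(set_has_element(KnownCountries, Country))
| summarize RefreshCount = count(), IPs = make_set(IPAddress, 5)
    by UserPrincipalName, Country
```

Country comparison is coarser but higher-signal than raw IP comparison: VPN and mobile churn changes a user's IPs constantly but rarely crosses national boundaries.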
The myth: Each data source can be hunted independently. Cross-table correlation is an advanced technique for later.
The reality: The most dangerous M365 attacks deliberately cross data source boundaries because they know most detection rules query a single table. AiTM phishing spans EmailEvents (delivery), SigninLogs (authentication), AuditLogs (persistence), and CloudAppEvents (post-compromise). A hunt that examines only SigninLogs finds the anomalous sign-in but misses the phishing email that confirms it and the inbox rule that proves persistence. Single-table hunting is valid for orientation queries (step 1). The indicator, enrichment, and pivot queries (steps 2–4) almost always require multi-table correlation to build sufficient confidence for escalation.
Extend these patterns
The six patterns cover the most common correlations. Campaign modules introduce technique-specific correlations that extend these — for example, TH12 (pre-ransomware activity) correlates DeviceProcessEvents with DeviceFileEvents and DeviceNetworkEvents in a time-sequenced chain (reconnaissance tool execution → backup disruption → C2 beaconing). The principle is the same: pre-filter each table, join on entity (user, device, IP), and constrain by time window. The specific tables and correlation logic change per technique.
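The shape of such a time-sequenced chain can be sketched as pre-filtered stages joined on device and constrained by time. The tool names, command lines, and 24-hour window below are illustrative assumptions, not the TH12 campaign queries:

```kql
// Illustrative sketch only: pre-filter each table, join on entity
// (DeviceId), constrain by time window. Tools/windows are assumptions.
let reconActivity = DeviceProcessEvents
| where TimeGenerated > ago(7d)
| where FileName in~ ("adfind.exe", "netscan.exe")
| project ReconTime = TimeGenerated, DeviceId, DeviceName,
    ReconTool = FileName;
let backupTampering = DeviceProcessEvents
| where TimeGenerated > ago(7d)
| where ProcessCommandLine has_any ("vssadmin delete shadows",
    "wbadmin delete catalog")
| project TamperTime = TimeGenerated, DeviceId,
    TamperCommand = ProcessCommandLine;
reconActivity
| join kind=inner backupTampering on DeviceId
| where TamperTime between (ReconTime .. (ReconTime + 24h))
// Recon tool followed by backup disruption on the same device within
// 24h; a DeviceNetworkEvents C2 stage extends the same join shape
| project DeviceName, ReconTool, ReconTime, TamperCommand, TamperTime
```

Adding the third stage (C2 beaconing) is the same move again: pre-filter DeviceNetworkEvents, join on DeviceId, and require the beacon to follow the backup disruption.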
References Used in This Subsection
- Microsoft. "KQL Join Operator." Microsoft Learn. https://learn.microsoft.com/en-us/kusto/query/join-operator
- Microsoft. "Advanced Hunting — Query Best Practices." Microsoft Learn. https://learn.microsoft.com/en-us/defender-xdr/advanced-hunting-best-practices
This detection capability integrates with the broader NE detection program — each rule contributes to the cumulative ATT&CK coverage that transforms NE from 7.2% baseline to 35%+ target coverage.
You have time for one hunt this quarter. Do you hunt for the threat in the latest advisory or for the gap in your ATT&CK coverage matrix?
Hunt the coverage gap. Advisories describe threats that are CURRENT but may not target NE. Coverage gaps describe techniques that COULD target NE and would succeed undetected. The coverage gap hunt produces a detection rule (closing the gap permanently). The advisory-driven hunt produces a point-in-time assessment (confirming the specific threat is not present today). Both are valuable — but the coverage gap hunt has a longer-lasting impact because it produces a permanent detection improvement.
You understand the detection gap and the hunt cycle.
TH0 showed you what detection rules fundamentally cannot catch. TH1 gave you the hypothesis-driven methodology that closes that gap. Now you run the hunts.
- 10 complete hunt campaigns — from hypothesis through KQL execution through finding disposition, each campaign based on a real TTP
- 70 production hunt queries — every one mapped to MITRE ATT&CK and tested against realistic telemetry
- Advanced KQL for hunting — UEBA composite risk scoring, retroactive IOC sweeps, and hunt management metrics
- Hypothesis-Driven Hunt Toolkit lab pack — 30 days of realistic M365 and endpoint telemetry with multiple attack patterns seeded in
- TH16 — Scaling hunts across a team — the operating model for a production hunt program