TH0.5 The Threat Landscape Driving Hunting Demand
The attackers adapted
Five years ago, a phishing email with a malicious attachment would trigger three or four detection layers: email gateway, endpoint antivirus, behavioral EDR, and SIEM correlation. The attacker who relied on basic credential phishing or commodity malware faced a stack of automated defenses that caught the attempt before it succeeded — or detected it within minutes of execution.
The detection stack got better. So the attackers changed.
The techniques that dominate the current M365 threat landscape are not the ones that detection engineering was built to stop. They are the ones specifically engineered to bypass it. Understanding what these techniques look like — and why they require hunting rather than more rules — is what separates a SOC that reads threat reports from a SOC that acts on them.
AiTM session hijacking
Adversary-in-the-Middle phishing is the defining technique of 2024–2026 M365 attacks. The attacker hosts a reverse proxy (EvilGinx, Modlishka, or a custom framework) that sits between the user and the real Microsoft login page. The user sees the real login page, enters their credentials, completes MFA, and receives a session token — but the proxy captures the token in transit.
The attacker now has a valid session token. They did not steal a password. They did not brute-force MFA. They captured a fully authenticated session that already passed every identity protection check Microsoft offers for standard MFA methods.
Why rules struggle: The sign-in that uses the stolen token is technically valid. The token has the correct MFA claim. The user agent may match a legitimate browser. If the attacker routes through a residential proxy in the same geography as the user, even the location looks plausible. Microsoft’s Identity Protection generates risk detections for some AiTM patterns (anomalousToken, tokenIssuerAnomaly) but these are probabilistic — they catch some variants and miss others. The ones they miss require hunting: looking for authentication patterns where non-interactive token refreshes come from IPs that do not match any recent interactive sign-in for that user. TH4 covers this.
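A minimal sketch of that hunt in KQL, assuming the Sentinel `SigninLogs` and `AADNonInteractiveUserSignInLogs` tables; the 14-day interactive baseline and 1-day lookback windows are illustrative assumptions, not tuned values:

```kusto
// Hedged sketch: non-interactive token use from IPs never seen in the same
// user's recent interactive sign-ins. Windows are illustrative assumptions.
let interactiveIPs = SigninLogs
    | where TimeGenerated > ago(14d)
    | where ResultType == "0"          // successful interactive sign-ins only
    | summarize by UserPrincipalName, IPAddress;
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(1d)
| where ResultType == "0"
// leftanti keeps only non-interactive events whose user+IP pair has no
// matching interactive sign-in in the baseline window
| join kind=leftanti interactiveIPs on UserPrincipalName, IPAddress
| summarize TokenRefreshes = count(), NewIPs = make_set(IPAddress) by UserPrincipalName
| order by TokenRefreshes desc
```

Results are leads, not verdicts: a new IP may be a VPN change or travel, so each hit still needs the baseline-aware triage that TH4 walks through.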
Living-off-the-cloud
Just as living-off-the-land describes attackers using legitimate system binaries on endpoints, living-off-the-cloud describes attackers using legitimate M365 services for their attack objectives. SharePoint for staging and exfiltration. OneDrive for file transfer. Teams for internal phishing. Power Automate for automated data collection. Forms for credential harvesting pages hosted on a legitimate Microsoft domain.
Why rules struggle: The traffic goes to Microsoft-owned domains. The API calls use standard Graph API endpoints. The authentication is a valid user session. Every individual operation — upload a file to SharePoint, create a Power Automate flow, send a Teams message — is a normal business operation. The distinction between legitimate use and attacker abuse is not in the operation itself but in the pattern: who is doing it, when, in what sequence, and at what volume. Pattern detection at that level is hunting, not rule matching.
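One hedged way to surface the volume dimension of that pattern, assuming the Sentinel `OfficeActivity` table; the operation names are real audit operations, but the 3x-baseline and 100-file thresholds are illustrative assumptions:

```kusto
// Hedged sketch: users whose SharePoint/OneDrive download volume in the last
// day far exceeds their own 30-day daily average. Thresholds are illustrative.
let baseline = OfficeActivity
    | where TimeGenerated between (ago(30d) .. ago(1d))
    | where Operation in ("FileDownloaded", "FileSyncDownloadedFull")
    | summarize DailyAvg = todouble(count()) / 29.0 by UserId;
OfficeActivity
| where TimeGenerated > ago(1d)
| where Operation in ("FileDownloaded", "FileSyncDownloadedFull")
| summarize Today = count() by UserId
| join kind=inner baseline on UserId
| where Today > 3 * DailyAvg and Today > 100
| project UserId, Today, DailyAvg
```

The per-user baseline is the point: a flat "N downloads per day" rule either drowns in noise from heavy legitimate users or misses low-and-slow collection.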
The rise of AI tool abuse has intensified this problem. Users paste sensitive data into ChatGPT, Gemini, and Claude through web browsers, through API integrations, and through third-party applications that connect to M365 data. The data leaves the organization through legitimate HTTPS connections to legitimate AI provider domains. No network rule fires because the destination is not malicious — the use of the destination is. TH11 addresses shadow IT and AI tool hunting.
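A hedged starting point for the visibility problem, assuming the Defender XDR `DeviceNetworkEvents` table; the domain list is an illustrative assumption and is neither complete nor authoritative:

```kusto
// Hedged sketch: endpoint connections to common AI-tool domains, summarized
// per domain and account. The domain list is illustrative, not exhaustive.
DeviceNetworkEvents
| where Timestamp > ago(7d)
| where RemoteUrl has_any ("openai.com", "chatgpt.com", "gemini.google.com", "claude.ai")
| summarize Connections = count(), Devices = dcount(DeviceId)
    by RemoteUrl, InitiatingProcessAccountName
| order by Connections desc
```

This enumerates use, not abuse — which is exactly the distinction the paragraph draws. Deciding which of these connections represents sanctioned tooling versus shadow IT is the hunt that TH11 covers.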
OAuth and application-layer persistence
OAuth application abuse has become the preferred persistence mechanism for sophisticated M365 attackers. The attacker — either through consent phishing or through compromised admin credentials — grants an application access to mailbox data, files, or directory information. The application authenticates with its own credentials (client secret or certificate), independent of any user’s password or MFA configuration.
This access survives password resets. It survives session revocation. It survives MFA re-enrollment. It survives conditional access policy changes (unless the policy specifically restricts application access). The application can read email, access files, and enumerate the directory indefinitely until someone revokes its permissions.
Why rules struggle: The OAuth consent event is a standard Entra ID operation. Thousands of legitimate applications are consented in enterprise M365 tenants. A rule that alerts on every application consent would generate unmanageable noise. A rule that alerts only on specific application names or permission scopes is evaded by using a different name or requesting permissions in a less obvious combination. The distinction between a legitimate productivity application and a malicious data theft application is the application’s behavior after consent — which requires monitoring over time, not a single-event rule. TH6 covers this as a complete hunt campaign.
A quick check — how many user-consented applications exist in your environment right now with high-privilege permissions?
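One hedged way to run that check in KQL, assuming the Sentinel `AuditLogs` table with Entra ID audit logging enabled; the permission list and 30-day window are illustrative assumptions:

```kusto
// Hedged sketch: user consent grants carrying high-privilege Graph permissions.
// The permission list and 30-day window are illustrative assumptions.
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Consent to application"
| extend AppName = tostring(TargetResources[0].displayName)
| mv-expand Prop = TargetResources[0].modifiedProperties
| where tostring(Prop.displayName) == "ConsentAction.Permissions"
| extend Permissions = tostring(Prop.newValue)
| where Permissions has_any ("Mail.Read", "Mail.ReadWrite",
    "Files.Read.All", "Files.ReadWrite.All", "Directory.Read.All")
| project TimeGenerated, AppName,
    ConsentedBy = tostring(InitiatedBy.user.userPrincipalName), Permissions
```

Note that this only sees consents inside the audit retention window; a full inventory of existing grants requires querying the directory itself (for example via Microsoft Graph), which TH6 addresses.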
If this returns results, you have user-consented applications with permissions sufficient to read all email or access all files. Some will be legitimate (known productivity tools). Some may not be. TH6 teaches you how to tell the difference.
Hybrid identity exploitation
Organizations that run hybrid M365 environments — with Azure AD Connect synchronizing identities between on-premises Active Directory and Entra ID — have an attack surface that spans both planes. An attacker with on-premises domain admin can compromise Azure AD Connect’s synchronization account (which has DCSync-equivalent permissions) and move to cloud Global Admin. An attacker with cloud credentials can access the VPN or use pass-through authentication to authenticate to on-premises resources.
Why rules struggle: Cloud security teams monitor cloud logs. Infrastructure teams monitor on-premises logs. The lateral movement between planes crosses a monitoring boundary. A sign-in anomaly in Entra ID followed by a VPN connection followed by RDP to a domain controller is three events in three different data sources — each individually unremarkable, collectively indicating a cloud-to-on-prem pivot. Correlating across these sources requires a hunting campaign that joins data from SigninLogs, VPN logs, and IdentityLogonEvents in a single investigation. TH10 covers this.
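A hedged sketch of that cross-plane join, assuming a Sentinel workspace where both `SigninLogs` and the Defender XDR `IdentityLogonEvents` table are available; the UPN-to-`AccountUpn` mapping and the 24-hour pivot window are illustrative assumptions:

```kusto
// Hedged sketch: risky cloud sign-ins followed within 24h by on-prem logons
// for the same account. Field mapping and window are illustrative assumptions.
let riskySignins = SigninLogs
    | where TimeGenerated > ago(7d)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | project UserPrincipalName, CloudTime = TimeGenerated, CloudIP = IPAddress;
IdentityLogonEvents
| where TimeGenerated > ago(7d)
| extend UserPrincipalName = AccountUpn
| join kind=inner riskySignins on UserPrincipalName
| where TimeGenerated between (CloudTime .. CloudTime + 24h)
| project UserPrincipalName, CloudTime, CloudIP,
    OnPremTime = TimeGenerated, DeviceName, LogonType
```

VPN logs sit in whatever custom table your connector writes to, so the full three-source correlation is environment-specific; TH10 builds it out end to end.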
Ransomware pre-encryption staging
Modern ransomware operations are not scripts that encrypt on execution. They are human-operated intrusions with multi-day or multi-week dwell times. The operator gains initial access (often through AiTM or an access broker), escalates privileges, maps the environment, identifies backup systems, disables or corrupts backup processes, stages encryption tools, and then — only then — executes encryption across the environment simultaneously.
Every step before encryption is potentially detectable. VSS deletion, backup service disruption, reconnaissance tool execution, mass SMB file access, C2 beaconing. But each individual step, in isolation, may not trigger a rule — because each step looks like a legitimate admin operation (stopping a service, running a query, accessing a file share) unless you understand the sequence.
Why rules struggle: Each pre-encryption step is individually low-confidence. Stopping a backup service might be maintenance. Running nltest /dclist might be a sysadmin checking domain controller health. Accessing multiple file shares might be a deployment script. The signal is in the sequence and timing — the same user or device performing reconnaissance, disabling backups, and staging files within a 48-hour window. Sequence correlation at this specificity requires human-driven investigation. TH12 teaches this as a complete hunt campaign.
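A hedged sketch of that sequence logic, assuming the Defender XDR `DeviceProcessEvents` table; the command-line indicators are illustrative assumptions and deliberately far from a complete list:

```kusto
// Hedged sketch: devices showing more than one pre-encryption stage within
// 48 hours. Indicator strings are illustrative assumptions, not a full list.
DeviceProcessEvents
| where Timestamp > ago(2d)
| extend Stage = case(
    ProcessCommandLine has_any ("nltest", "AdFind", "net group \"domain admins\""), "Recon",
    ProcessCommandLine has_any ("vssadmin delete shadows", "wbadmin delete catalog"), "BackupTamper",
    "")
| where Stage != ""
| summarize Stages = make_set(Stage),
    FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
    by DeviceId, DeviceName
| where array_length(Stages) >= 2
```

Requiring two distinct stages on one device is what lifts the query above single-event noise: either indicator alone is routine admin activity, but the combination inside the window is the human-operated pattern described above.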
Figure TH0.5 — Current M365 threat landscape. Each technique uses legitimate M365 operations for malicious purposes. The common thread: detection rules see individual events; hunting sees the pattern that connects them.
The common thread
Every technique described above shares one characteristic: the attacker uses legitimate credentials to perform legitimate operations through legitimate interfaces. There is no malware signature to match. There is no exploit to detect. There is no unauthorized access event. The attacker is authorized — they are using stolen credentials or compromised applications that have been granted access. The operations they perform (reading email, downloading files, creating rules, consenting to applications) are the same operations that millions of legitimate users perform every day.
This is why the current threat landscape demands hunting. The attacks are not getting more technically sophisticated in the traditional sense — they are not exploiting zero-day vulnerabilities or writing novel malware. They are getting operationally sophisticated — using the victim’s own infrastructure, the victim’s own credentials, and the victim’s own services to achieve their objectives. The distinction between the attacker and the legitimate user is behavioral context: who is doing this, why, in what sequence, and compared to what baseline.
That contextual analysis is hunting. No detection rule, regardless of how well-crafted, can perform it at the depth and nuance that a trained human analyst can — because the analysis requires understanding the business context, the user’s role, the organizational norms, and the baseline that defines “normal” for this specific environment.
Try it yourself
Exercise: Map your detection coverage against the current threat landscape
For each of the five technique categories above, answer:
AiTM session hijacking: Do you have a detection rule for token replay from new IPs? For non-interactive sign-ins from IPs not matching interactive baselines? If not → TH4 is your first priority hunt.
Living-off-the-cloud: Do you monitor data downloads from SharePoint/OneDrive at the per-user level? Do you have visibility into Power Automate flow creation? If not → TH8 and TH11 are high priority.
OAuth persistence: Do you monitor user consent events? Do you audit application permissions quarterly? Can you identify which applications accessed data after consent? If not → TH6 may produce immediate remediation actions.
Hybrid identity: Do you correlate cloud sign-in anomalies with VPN and on-premises authentication? Is Azure AD Connect account activity monitored? If not → TH10 addresses the cross-boundary gap.
Ransomware staging: Do you have detections for VSS deletion, backup service disruption, or C2 beaconing? Do those detections fire in time — before encryption? If not → TH12 covers the pre-encryption window.
The techniques where you answered "no" most frequently are your highest-priority hunts. You are building your backlog.
The myth: Defender for Endpoint (or CrowdStrike, SentinelOne, etc.) detects and responds to all endpoint threats. If the endpoint is covered, the organization is protected.
The reality: Three of the five technique categories above operate entirely in the cloud plane and never touch an endpoint: AiTM session hijacking (cloud authentication), OAuth persistence (cloud application), and living-off-the-cloud (cloud services). EDR has zero visibility into these techniques because they do not involve endpoint processes, files, or registry changes. Even hybrid identity exploitation begins in the cloud before pivoting to an endpoint. An organization that relies on EDR for all threat detection has no visibility into the attack techniques that dominate the current M365 threat landscape. Cloud hunting — using Sentinel, Defender XDR Advanced Hunting, and the cloud telemetry tables — is the only way to address them.
Extend this analysis
The threat landscape evolves continuously. The techniques described here reflect 2024–2026 attack patterns. By the time you read this, new variants will exist — new AiTM toolkit evasions, new OAuth abuse patterns, new cloud service abuse methods. The principle endures even as the specifics change: attackers adapt to your detection capability, and the adaptations are designed to operate in the gaps between your rules. Subscribe to Microsoft Security Blog, CISA advisories, and your ISAC's threat briefings. Each new report is a potential hunt hypothesis. TH1 (The Hunt Cycle) teaches the methodology for converting threat intelligence into hunt campaigns. TH3 (ATT&CK Coverage Analysis) teaches the systematic approach to identifying which new techniques your rules do not cover.
References Used in This Subsection
- Microsoft Threat Intelligence. “Midnight Blizzard conducts targeted social engineering over Microsoft Teams.” Microsoft Security Blog, August 2023. — verify URL
- Microsoft Threat Intelligence. “Storm-1567 AiTM phishing campaigns.” Microsoft Security Blog. — verify URL and report title
- MITRE ATT&CK Techniques referenced: T1557 (Adversary-in-the-Middle), T1539 (Steal Web Session Cookie), T1078 (Valid Accounts), T1098.003 (Additional Cloud Roles), T1071.001 (Application Layer Protocol: Web Protocols), T1486 (Data Encrypted for Impact), T1490 (Inhibit System Recovery)
- FBI. “Internet Crime Complaint Center — 2023 Internet Crime Report.” BEC loss data.
- CrowdStrike. “2024 Threat Hunting Report.” — verify URL