EI0.8 Real-World Identity Breaches

50-70 minutes · Module 0 · Free
Operational Objective
Abstract attack descriptions are useful for understanding techniques, but real-world breaches reveal how these techniques combine in practice — and where defensive controls actually fail. This subsection presents three composite case studies based on patterns observed in production M365 environments. Each demonstrates a different attacker motivation, a different kill chain progression, and a different set of defensive failures that this course teaches you to prevent.
Deliverable: A practical understanding of how identity attacks unfold in real environments, which defensive controls would have prevented each breach at each stage, and the specific modules that teach those controls.
⏱ Estimated completion: 18 minutes
Figure EI0.8 — Operational workflow from input through documented output (Input → Process → Analyse → Decide → Output).

Figure — Real-World Identity Breaches.

Case study 1: AiTM to BEC — the $47,000 wire fraud

Organization: Mid-size professional services firm. 400 users. M365 E3 licensing. Entra ID P1. Conditional access policies: require MFA for all users (push notification via Microsoft Authenticator). No device compliance requirements. No token protection. User consent for applications allowed.

The attack: A senior partner receives an email that appears to be a shared document notification from a client. The link leads to an EvilProxy AiTM phishing kit that presents a pixel-perfect replica of the Microsoft sign-in page. The partner enters their credentials. The proxy forwards the credentials to the real Entra ID endpoint. Entra ID challenges for MFA. The partner approves the Authenticator push notification — the application name shows "Office 365" and the location shows the correct city (because the proxy is geographically close). The proxy captures the session token.


The attacker replays the session token from a VPN exit node in the same country. Conditional access evaluates the sign-in: MFA has been satisfied (by the legitimate user through the proxy), the location is within the expected country, and no device compliance is required. Access is granted. The attacker accesses Outlook Web Access and spends twenty minutes reading the partner's email, identifying an active vendor payment for $47,000 due within three days. They create an inbox rule to forward all emails from the vendor's domain to an external address and mark them as read. They then reply to the payment thread from the partner's account, requesting a change to the bank details for the upcoming payment.

The vendor's accounts payable team processes the updated bank details. The $47,000 payment goes to the attacker's account. The breach is discovered four days later when the vendor contacts the firm about a separate invoice and the firm realizes the payment was redirected.

What would have stopped this:

Stage 2 (initial access) — phishing-resistant MFA (FIDO2 or passkeys) would have prevented the AiTM proxy from capturing a usable credential. The FIDO2 key verifies the domain cryptographically — it will not authenticate against an AiTM proxy domain. This single control would have prevented the entire incident. Covered in EI2 and EI4.

Stage 2 (initial access) — token protection would have bound the session token to the partner's device. Even if the proxy captured the token, replaying it from the attacker's device would have failed. Covered in EI7.

Stage 2 (initial access) — conditional access requiring a compliant device would have blocked the token replay because the attacker's device is not enrolled in Intune. Covered in EI3.

Stage 3 (persistence) — inbox rule creation could have triggered a Sentinel analytics rule within minutes, alerting the SOC before the vendor payment was intercepted. Covered in EI13.

Total preventable loss: $47,000 plus investigation costs, legal fees, regulatory notification, and reputational damage.

Attack timeline — what the logs would show:

T+0:00 — Phishing email delivered to the partner's mailbox. Defender for Office 365 did not flag the email because the AiTM domain was newly registered and had no prior reputation data. The email contained a link to a legitimate-looking document sharing notification.

T+0:03 — The partner clicks the link and enters credentials on the AiTM proxy. The sign-in log shows a successful interactive sign-in for the partner's account. The authentication details show password + Authenticator push notification approved. The IP address is the AiTM proxy's VPN exit node, which happens to be in the same country as the user.

T+0:04 — The attacker replays the captured session token. A second sign-in event appears in the logs — this time a non-interactive sign-in from a different IP address but the same country. The conditional access evaluation shows "Success" because MFA was already satisfied by the original sign-in and no device compliance is required. Identity Protection may flag this as "unfamiliar sign-in properties" depending on the IP reputation, but the risk policy is not set to enforce any action.

T+0:06 — The attacker accesses Outlook Web Access. The OfficeActivity logs record MailboxLogin events from the attacker's IP address.

T+0:22 — The attacker creates an inbox rule. The audit log records a "New-InboxRule" operation with the rule forwarding emails containing financial keywords to an external email address. No alert fires because no Sentinel analytics rule monitors for inbox rule creation.

T+0:25 — The attacker replies to the vendor payment thread, requesting a bank detail change. The OfficeActivity logs record a Send event. The email appears to come from the partner's legitimate account.

T+4 days — The vendor processes the payment to the attacker's bank account. The breach is discovered when the vendor follows up on a separate matter.

If the organization had been monitoring the sign-in logs for the second sign-in from a different IP within minutes of the first (a classic AiTM indicator), the attacker would have been detected at T+0:04 — before any mailbox access, before the inbox rule, and before the BEC email. The detection query for this pattern is covered in EI13.
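A minimal sketch of that double-sign-in detection, assuming Entra ID sign-in logs are flowing into a Sentinel workspace. The ten-minute window and lookback period are illustrative thresholds, not prescriptions; tune them for your environment:

// EI0.8 — Sketch: AiTM indicator, same user signing in from multiple IPs
// Assumes SigninLogs is ingested into Sentinel; thresholds are illustrative
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == "0"            // successful sign-ins only
| summarize DistinctIPs = dcount(IPAddress), IPs = make_set(IPAddress)
    by UserPrincipalName, Window = bin(TimeGenerated, 10m)
| where DistinctIPs > 1
// Each result: one user with sign-ins from more than one IP in ten minutes

Legitimate causes (mobile plus desktop, VPN toggling) will produce hits too, so this pattern is a triage signal rather than an automatic block.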

Case study 2: Consent phishing and intellectual property theft

Organization: Technology company with intellectual property in SharePoint. 1,200 users. M365 E5 licensing. Entra ID P2. Conditional access: MFA required, device compliance required for desktop applications. Identity Protection enabled but risk policies set to "report only" (never promoted to enforcing). User consent for applications allowed for "verified publishers."

The attack: A developer receives an email that appears to be from a collaboration tool the company uses. The email contains a button labeled "Connect your Microsoft 365 account for enhanced integration." The link opens the Entra ID consent prompt for an application called "DevSecOps Integration Hub" — the attacker has registered this application with a display name that sounds legitimate and has obtained "verified publisher" status through a compromised publisher account.


The application requests delegated permissions: Sites.Read.All, Files.Read.All, and User.Read. The developer reviews the permissions — they look reasonable for a collaboration tool integration. They click "Accept." The application now has access to read all SharePoint sites and files accessible to the developer, plus basic profile information.

The attacker uses the application's delegated access to enumerate all SharePoint sites the developer can access. They discover the product roadmap site, the engineering specifications library, and the customer contracts repository. Over the following two weeks, the application systematically reads and copies documents from these sites. The access appears as application activity in the audit logs, not as the developer's user activity — and the organization is not monitoring application consent grants or application file access patterns.

The breach is discovered three months later when the company's intellectual property appears in a competitor's product filing. Forensic investigation traces the data access back to the malicious application consent.

What would have stopped this:

Stage 3 (persistence via consent) — blocking all user consent and requiring admin consent workflow would have sent the consent request to an administrator for review. An experienced administrator would have questioned why a "DevSecOps Integration Hub" needs Sites.Read.All and Files.Read.All permissions. Covered in EI9.

Stage 3 (persistence via consent) — even with "verified publisher" filtering, the admin consent workflow provides a human review step. The verification that the publisher status was legitimate (rather than obtained through a compromised publisher account) would have caught the discrepancy. Covered in EI9.

Stage 5 (lateral movement via app access) — Defender for Cloud Apps app governance policies can monitor application access patterns and alert when an application accesses an unusual volume of files or sites. Covered in EI9 and EI16.

Stage 5 (lateral movement) — detection rules monitoring the AuditLogs for "Consent to application" events with high-risk permissions (Sites.Read.All, Files.Read.All, Mail.ReadWrite) would have flagged the consent grant immediately. Covered in EI13.
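One possible shape for that consent-grant detection, sketched against the Entra audit schema as it appears in Sentinel's AuditLogs table. The dynamic-field paths (TargetResources, modifiedProperties, InitiatedBy) follow the documented audit format, but verify them against real consent events in your own tenant before relying on the rule:

// EI0.8 — Sketch: consent grants that include high-risk permissions
// Verify the dynamic-field paths against your own audit log entries
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Consent to application"
| extend AppName = tostring(TargetResources[0].displayName)
| extend Props = tostring(TargetResources[0].modifiedProperties)
| where Props has_any ("Sites.Read.All", "Files.Read.All", "Mail.ReadWrite")
| project TimeGenerated, AppName,
    GrantedBy = tostring(InitiatedBy.user.userPrincipalName)
// Each result: a user consented to an application requesting sensitive scopes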

Key lesson: Device compliance and MFA did not help because the user was never compromised. The attacker never needed the developer's password or token — they tricked the developer into voluntarily granting access. Consent governance is a different control plane than authentication.

Case study 3: Insider threat through privilege creep — the departing administrator

Organization: Financial services firm. 3,000 users. M365 E5. Entra ID P2. Conditional access: comprehensive policies requiring MFA and compliant devices. Identity Protection: risk policies enforcing. PIM: enabled for Global Admin only. Standing assignments for Exchange Admin, SharePoint Admin, User Admin, and Application Admin.

The attack: A systems administrator who has been with the organization for six years submits their two-week resignation notice. Over the years, they have accumulated standing assignments to Exchange Administrator, SharePoint Administrator, and Application Administrator — roles granted for specific projects that were never revoked when the projects ended. They also have delegated admin access to several service principals used for automation.


During their notice period, the administrator creates a new application registration with Mail.ReadWrite and Files.ReadWrite.All application permissions — permissions that do not require user context. They add a client secret to this application with a two-year expiration. They also add a client secret to an existing service principal used for the company's CRM integration, which has User.ReadWrite.All permissions. Both actions appear as routine administrative activity in the audit log.

After their departure, the administrator uses the application credentials to access the organization's email and files remotely. Because the access uses application permissions (not delegated permissions), there are no user sign-in events — the activity appears only in the service principal sign-in logs, which the organization does not monitor. The access continues for four months until a routine security assessment discovers the unauthorized application.
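The service principal sign-in logs the organization was not watching can be baselined with a query like the following sketch. Note that AADServicePrincipalSignInLogs is a separate diagnostic category that must be explicitly enabled in the Entra ID diagnostic settings before any data appears:

// EI0.8 — Sketch: baseline service principal sign-in activity
// Assumes the AADServicePrincipalSignInLogs diagnostic stream is enabled
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(7d)
| summarize SignIns = count(), IPs = make_set(IPAddress)
    by ServicePrincipalName, AppId
| order by SignIns desc
// New service principals or unfamiliar source IPs here warrant investigation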

What would have stopped this:

Stage 3-4 (persistence and escalation) — PIM for all privileged roles, not just Global Admin, would have meant the administrator's Exchange Admin, SharePoint Admin, and Application Admin permissions were eligible rather than standing. They would have needed to activate each role with justification, and the activation would have been time-limited. Covered in EI6.

Stage 3 (persistence) — access reviews for privileged role assignments would have caught the accumulated roles. A quarterly review asking the administrator's manager "does this person still need Exchange Administrator?" would have removed the unnecessary assignments before the resignation. Covered in EI12.

Stage 3 (persistence via application) — application registration monitoring (detection rules for "Add application" and "Add service principal credentials" by users who are not in an approved DevOps team) would have flagged the new application and the credential addition. Covered in EI13.

Stage 3 (persistence via application) — a documented offboarding procedure that includes reviewing all application registrations and service principal credentials owned by the departing employee would have caught the backdoor credentials during the notice period. Covered in EI12 (lifecycle workflows).
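The application-registration monitoring described above can be sketched as a single AuditLogs query. The operation names below follow the Entra audit log; confirm the exact strings against events generated in your own tenant:

// EI0.8 — Sketch: application registrations and credential additions
// Confirm the OperationName strings against your tenant's audit events
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName in ("Add application", "Add service principal credentials")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, OperationName, Actor,
    Target = tostring(TargetResources[0].displayName)
// Compare Actor against your approved DevOps or automation team membership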

Key lesson: This incident did not involve external attackers, phishing, or technical exploits. It exploited privilege creep (accumulated standing permissions that were never reviewed) and weak offboarding procedures (no review of application credentials). The most sophisticated conditional access architecture in the world does not prevent an insider with legitimate administrative access from establishing persistent backdoor access through application credentials. Governance controls — PIM, access reviews, lifecycle workflows, and application monitoring — are the defense layer for this threat.

The common thread

All three case studies share a pattern: the attack succeeded not because the organization had no security controls, but because the security controls had specific gaps that the attacker (or insider) exploited.

The professional services firm had MFA — but not phishing-resistant MFA. The technology company had device compliance — but not consent governance. The financial services firm had comprehensive conditional access — but not privileged access management beyond Global Admin.

This is why the Defense Design Method starts with "what attack does this stop." A security control that does not map to a specific attack technique is a compliance checkbox. A security control that maps to a specific attack technique but has gaps in coverage is a false sense of security. The goal of this course is to help you deploy controls that are comprehensive, verified, and effective against the attack techniques that actually target production M365 environments.

Why these breaches are hard to detect without the right controls

Each case study was not just a failure of prevention — it was a failure of detection. The attacks persisted for days, weeks, or months because the evidence was present in the logs but nobody was looking at it in the right way.

In case study 1, the AiTM attack produced a clear indicator: two sign-ins for the same user from different IP addresses within minutes. This is one of the most reliable AiTM indicators. But without a Sentinel analytics rule monitoring for this specific pattern, the sign-in log entry was one of thousands of daily entries that nobody reviewed. The inbox rule creation was logged as a "New-InboxRule" operation in the audit log — again, visible but not monitored. The BEC email was sent from a legitimate internal account, so Defender for Office 365 had no reason to flag it as phishing.

// EI0.8 — Check for risky sign-ins in your environment
SigninLogs
| where TimeGenerated > ago(30d)
| where RiskLevelDuringSignIn in ("medium", "high")
| summarize RiskySignIns = count() by RiskLevelDuringSignIn,
    RiskEventTypes = tostring(RiskEventTypes_V2)
| order by RiskySignIns desc
// Shows real risk detections in your environment
// Every result is a potential attack that Identity Protection flagged
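The inbox-rule indicator from case study 1 can be surfaced the same way. This sketch assumes the Office 365 connector is feeding the OfficeActivity table:

// EI0.8 — Sketch: inbox rule creation events (case study 1 indicator)
// Assumes the Office 365 connector populates OfficeActivity in Sentinel
OfficeActivity
| where TimeGenerated > ago(7d)
| where Operation == "New-InboxRule"
| project TimeGenerated, UserId, ClientIP, Parameters
// Rules forwarding externally or matching financial keywords come first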

In case study 2, the consent phishing produced an audit log entry for "Consent to application" with the application's name and the granted permissions. But the organization was not monitoring for consent events with sensitive permissions. The subsequent data access by the malicious application appeared in the OfficeActivity logs as application-context file reads — a log category that most organizations never review because it is high-volume and most application access is legitimate.

In case study 3, the insider's actions were individually indistinguishable from legitimate administrative work. Creating an application registration is something administrators do regularly. Adding a client secret to a service principal is routine maintenance. The only indicator was the correlation: an administrator who just submitted their resignation performing these actions during their notice period. That correlation requires combining HR data (resignation date) with audit log data (administrative actions) — a detection approach that most organizations have never implemented but that EI13 covers.
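One way to sketch that HR-to-audit-log correlation in Sentinel uses a watchlist. Everything named here is a hypothetical example: "DepartingEmployees" is an assumed watchlist (search key UserPrincipalName, plus a NoticeDate column) maintained from HR data, not a built-in:

// EI0.8 — Sketch: admin app/credential changes by departing employees
// "DepartingEmployees" is a hypothetical watchlist fed from HR data
let Departing = _GetWatchlist('DepartingEmployees')
    | project Actor = tostring(SearchKey), NoticeDate = todatetime(NoticeDate);
AuditLogs
| where OperationName in ("Add application", "Add service principal credentials")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| join kind=inner Departing on Actor
| where TimeGenerated > NoticeDate
| project TimeGenerated, OperationName, Actor, NoticeDate
// Each result: a privileged change made after the actor gave notice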

The detection engineering module (EI13) addresses all three patterns with specific KQL analytics rules. The monitoring module (EI14) establishes the operational cadence that ensures someone is actually reviewing the detection output daily. Prevention is always preferable to detection, but assume breach (EI0.7) means building the detection layer for when prevention fails.

Try it yourself

Try It — Assess Your Exposure to These Scenarios

Exercise: For each of the three case studies, assess whether your environment (or an environment you are responsible for) would be vulnerable to the same attack pattern:

1. AiTM to BEC: Is your MFA phishing-resistant (FIDO2/passkeys) or phishing-capable (push notifications/SMS)? Is token protection enabled? Is device compliance required?
2. Consent phishing: Can users consent to applications without admin approval? Do you monitor the audit log for new consent grants with sensitive permissions?
3. Insider privilege abuse: Are all privileged roles managed through PIM, or do some have standing assignments? Do you review application registrations during employee offboarding?

For each "no" answer, note the corresponding course module. These are your highest-priority learning targets.

⚠ Compliance Myth: "These attacks only happen to large enterprises"

The myth: Our organization is too small to be targeted. AiTM kits and consent phishing campaigns target Fortune 500 companies, not mid-size firms.

The reality: AiTM phishing kits like EvilProxy and Evilginx are available as services — attackers do not need to build their own infrastructure. They send phishing campaigns to thousands of email addresses simultaneously, and the targets are whoever clicks the link. The professional services firm in case study 1 had 400 users. The BEC loss was $47,000 — significant for a mid-size firm, but the same attack against a larger organization's accounts payable process could be orders of magnitude larger. Organization size determines the potential impact, not the likelihood of being targeted. The defensive controls are the same regardless of organization size.

Decision point

You are reviewing NE's Entra ID security posture. You find 4 accounts with Global Administrator role, but NE's policy says maximum 2. The extra 2 were added during the AiTM incident for emergency response and never removed. Do you remove them?

Answer: Remove them — but through the proper process, not unilaterally. Notify the account owners that their emergency GA assignment is being revoked, confirm they have their standard role assignments restored, and document the removal with the rationale ('emergency assignment during INC-NE-2026-0227-001, no longer required'). Then add a PIR action item: 'Implement PIM time-limited role assignments for future incident response — emergency GA assignments auto-expire after 8 hours rather than persisting indefinitely.' The stale emergency assignment is a governance failure, not a technical failure — the fix is procedural.

NE's Entra ID security audit reveals: 4 Global Administrators (policy says 2), 23 users with Global Reader from a completed project, a break-glass account with no monitoring rule, and 3 guest accounts with no expiry date. Which finding is the highest priority?

  • The 4 Global Administrators — 2 extra GAs doubles the attack surface.
  • The break-glass account with no monitoring rule.
  • The 23 stale Global Readers — this is the largest number of affected accounts.
  • The 3 guest accounts — external accounts without expiry are the highest risk.

Answer: The break-glass account with no monitoring rule. The 4 GAs and stale Global Readers are governance issues that should be remediated — but they are existing conditions, not active threats. The unmonitored break-glass account is a critical detection gap: if the break-glass account is compromised or misused, the SOC has no alert. A break-glass account is excluded from CA policies by design — it is the most powerful and least restricted account in the tenant. Without monitoring, its compromise or misuse is invisible. Deploy the monitoring rule (any sign-in to the break-glass account = Severity 1 alert) before addressing the other findings.
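The break-glass monitoring rule itself can be as simple as the following sketch. The UPN is a placeholder; substitute your own break-glass account and wire the rule to a Severity 1 alert:

// EI0.8 — Sketch: alert on any break-glass account sign-in
// The UPN below is a placeholder; substitute your own break-glass account
SigninLogs
| where UserPrincipalName =~ "breakglass@yourtenant.onmicrosoft.com"
| project TimeGenerated, IPAddress, AppDisplayName, ResultType,
    ConditionalAccessStatus
// Any result should page the SOC: legitimate use is rare and pre-announced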

You've mapped the identity threat landscape and learned to read sign-in logs.

EI0 established that every cloud attack starts with identity. EI1 took you through the signal that matters most — interactive, non-interactive, service principal, and managed identity sign-ins. Now you engineer the defences.

  • 17 engineering modules — authentication methods, conditional access architecture, Identity Protection, PIM, token protection, application governance, and detection rules
  • The Defense Design Method — the six-step framework applied to every identity control you'll build
  • EI18 Capstone — Identity Security Architecture Design — design complete identity architectures for three realistic organisations (SMB, mid-market, regulated enterprise)
  • Identity Security Toolkit lab pack — deployable conditional access policies, PIM configurations, and Identity Protection risk rules
  • Cross-domain detection (EI16) — email-to-identity correlation and the full phishing-to-inbox-rule attack chain