In this module

MSA0.4 Threat-Informed Architecture

5 hours · Module 0 · Free
What you already know

You're familiar with security threats to M365 environments — phishing, credential theft, ransomware. You may have worked with MITRE ATT&CK technique IDs. This sub teaches you to use the threat landscape as a design input for architecture, not just a detection input for SOC operations — and grounds every threat pattern in the actual Entra sign-in logs, audit logs, and Graph API output you'd see during a real compromise.

Most M365 security designs start from the feature list — enable this, configure that, follow the Secure Score recommendation. Threat-informed architecture starts from the opposite direction: what do attackers actually do in M365 environments, and which architectural controls stop each attack pattern? This sub walks four prevalent attack patterns with the actual telemetry each one produces — sign-in logs, audit logs, risk detections — so you understand what the architecture needs to prevent and what it needs to detect.

Estimated time: 35 minutes.

THREAT-INFORMED ARCHITECTURE: FOUR PATTERNS + TELEMETRY

AiTM phishing: reverse proxy steals the session token post-MFA.
  Log: SigninLogs · Key: "Previously satisfied" · Stop: phishing-resistant MFA + device compliance (MSA2 + MSA3)

Consent phishing: malicious app gets OAuth permissions.
  Log: AuditLogs · Key: Consent to application · Stop: admin consent workflow + app governance (MSA4)

Token theft: stolen token replayed from a new device/location.
  Log: SigninLogs · Key: deviceId empty · Stop: CAE + device binding + token lifetime policies (MSA2 + MSA3)

Ransomware: credential → lateral → privilege → encrypt.
  Log: multi-source · Key: XDR correlation · Stop: PIM + tiered admin + detection architecture (MSA4 + MSA8–MSA11)

Prioritisation: prevalent → proven → potential. Architect first against attacks used against you or your peers; architect next against attacks proven elsewhere; monitor what is possible but not active. Each pattern below includes the actual sign-in log or audit log entry the attack produces, annotated field by field, so you know what you're architecting against. This course follows the same sequence: identity first (prevalent), protection second, detection third.

Figure MSA0.4 — Four prevalent M365 attack patterns mapped to architectural controls and the telemetry that reveals them. Investment follows threat prevalence, not feature availability.

Design from the threat, not from the feature list

Most M365 security projects start in the wrong place. They start from the feature list — Secure Score recommendations, vendor documentation, conference talks about new capabilities. The result is a tenant where features are enabled because they're available, not because they address a specific threat. Push notification MFA is deployed because "MFA" is on every recommendation list. Sensitivity labels are published because Purview is in the license. Sentinel is connected because "SIEM" is expected. But none of these decisions were informed by what attackers actually do to M365 tenants.

Threat-informed architecture reverses the direction. You start with the attacks — the specific techniques that are actively used against M365 environments right now — and work backward to the controls that stop them. This is the approach the MCRA recommends: prioritize controls that increase the cost and friction for prevalent attack techniques, not controls that check boxes on a feature adoption dashboard.

The MCRA organises threats into three tiers. Prevalent threats are attacks actively used against you or your industry peers — architect against these first. Proven threats are attacks that work against other organizations but haven't targeted you yet — architect against these next. Potential threats are theoretically possible but not actively observed in the wild — monitor these, but don't over-invest before the prevalent threats are addressed.

For most M365 environments in 2025–2026, the four prevalent attack patterns are AiTM phishing, consent phishing, token theft, and human-operated ransomware. This sub walks each one with the actual telemetry it produces — the sign-in log entry, the audit log event, the risk detection — so you understand exactly what you're designing against. The artifacts aren't illustrations. They're the evidence you'll reference when you write the ADRs in MSA1 onward.

Pattern 1 — AiTM phishing: what the attacker's session looks like

The attacker operates a reverse proxy (EvilGinx, Muraena, or a custom toolkit) between the user and Microsoft's sign-in portal. The user navigates to what looks like login.microsoftonline.com but is actually the attacker's domain. The proxy forwards everything to the real Microsoft endpoint — including the MFA challenge. The user completes authentication normally, including approving the push notification or entering the TOTP code. The proxy captures the session cookie that Microsoft issues after successful authentication, then replays it from the attacker's infrastructure.

Here's what the attacker's replayed session looks like in the Entra sign-in log. Every field is from the Graph v1.0 signIn resource:

{
  "id": "a4c21e9f-7b03-4d8a-b5f2-6e3d1c8a9b07",
  "createdDateTime": "2026-04-10T14:22:18Z",
  "userDisplayName": "Priya Sharma",
  "userPrincipalName": "p.sharma@yourtenant.onmicrosoft.com",
  "userId": "b5d9e3a2-c1f4-4b87-9a63-7e2d8f1c3b56",
  "appDisplayName": "Microsoft Office 365 Portal",
  "ipAddress": "203.0.113.88",
  "clientAppUsed": "Browser",
  "correlationId": "f12a3b4c-5d6e-7f89-0a1b-2c3d4e5f6789",
  "conditionalAccessStatus": "success",
  "isInteractive": false,
  "riskDetail": "none",
  "riskLevelAggregated": "high",
  "riskLevelDuringSignIn": "high",
  "riskState": "atRisk",
  "riskEventTypes_v2": ["unfamiliarFeatures", "anomalousToken"],
  "resourceDisplayName": "Microsoft Office 365",
  "status": { "errorCode": 0 },
  "deviceDetail": {
    "deviceId": "",
    "displayName": null,
    "operatingSystem": "Windows 10",
    "browser": "Chrome 124.0",
    "isCompliant": null,
    "isManaged": null,
    "trustType": ""
  },
  "location": {
    "city": "Lagos",
    "state": "Lagos",
    "countryOrRegion": "NG"
  },
  "appliedConditionalAccessPolicies": [
    {
      "displayName": "Require MFA - All Users",
      "result": "success",
      "enforcedGrantControls": ["Mfa"]
    }
  ],
  "authenticationDetails": [
    {
      "authenticationMethod": "Previously satisfied",
      "authenticationMethodDetail": "Previously satisfied",
      "succeeded": true,
      "authenticationStepResultDetail": "MFA requirement satisfied by claim in the token"
    }
  ]
}

Read the fields that expose this as an attack.

authenticationDetails[0].authenticationMethod: "Previously satisfied" with authenticationStepResultDetail: "MFA requirement satisfied by claim in the token" — the attacker didn't authenticate. They replayed a token that already contained the MFA claim. The original user completed MFA on the proxy. The proxy captured the resulting session cookie. The attacker's request arrives with a pre-satisfied MFA claim.

ipAddress: "203.0.113.88" in Lagos, Nigeria — Priya Sharma works in Manchester. location.countryOrRegion: "NG" confirms the geolocation mismatch. But this alone isn't definitive — Priya could be traveling.

deviceDetail.deviceId: "" — this is the critical architectural indicator. No enrolled device. If the CA policy required compliantDevice in addition to MFA, this sign-in would have been blocked. The attacker's machine isn't enrolled in Entra ID. The device compliance signal is absent.

riskLevelDuringSignIn: "high" and riskEventTypes_v2: ["unfamiliarFeatures", "anomalousToken"] — Identity Protection flagged this as high-risk and identified two risk event types. anomalousToken is the specific risk detection for token replay. But look at appliedConditionalAccessPolicies: the only policy that evaluated this sign-in was "Require MFA — All Users," and it succeeded — because the token had the MFA claim. No risk-based CA policy exists to act on the high risk signal.

isInteractive: false — this is a non-interactive sign-in. The attacker is using the stolen cookie to access resources, not going through an interactive login flow. This means the sign-in appears in the non-interactive sign-in logs, which many SOC teams don't monitor with the same attention as interactive sign-ins.
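Taken together, those fields form a triage check you can run against exported sign-in records. The sketch below is illustrative Python, not a Microsoft SDK function — the helper name replay_indicators is mine — and it assumes the record is the Graph v1.0 signIn JSON shown above:

```python
import json

# Illustrative triage helper (not a Microsoft SDK function): collects the
# token-replay indicators discussed above from a Graph v1.0 signIn record.
def replay_indicators(sign_in: dict) -> list[str]:
    indicators = []
    # "Previously satisfied": the MFA claim arrived inside the token,
    # so no authentication actually happened during this sign-in.
    if any(d.get("authenticationMethod") == "Previously satisfied"
           for d in sign_in.get("authenticationDetails", [])):
        indicators.append("mfa-claim-in-token")
    # Empty deviceId: the session is not bound to an enrolled device.
    if sign_in.get("deviceDetail", {}).get("deviceId", "") == "":
        indicators.append("unregistered-device")
    # Non-interactive: cookie replay, not a fresh login flow.
    if not sign_in.get("isInteractive", True):
        indicators.append("non-interactive")
    risk = sign_in.get("riskLevelDuringSignIn")
    if risk in ("medium", "high"):
        indicators.append(f"risk-{risk}")
    return indicators

record = json.loads("""{
  "isInteractive": false,
  "riskLevelDuringSignIn": "high",
  "deviceDetail": {"deviceId": ""},
  "authenticationDetails": [{"authenticationMethod": "Previously satisfied"}]
}""")
print(replay_indicators(record))
# ['mfa-claim-in-token', 'unregistered-device', 'non-interactive', 'risk-high']
```

Note that a legitimate SSO session also produces "Previously satisfied", which is why the device and risk signals matter: the combination, not any single field, is the indicator.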

The architectural controls that would have stopped this:

# Check: phishing-resistant MFA in any CA policy?
Get-MgIdentityConditionalAccessPolicy |
  Where-Object { $_.GrantControls.AuthenticationStrength -ne $null } |
  Select-Object DisplayName,
    @{N='AuthStrength';E={$_.GrantControls.AuthenticationStrength.DisplayName}}

# Check: device compliance required?
Get-MgIdentityConditionalAccessPolicy |
  Where-Object { $_.GrantControls.BuiltInControls -contains "compliantDevice" } |
  Select-Object DisplayName

# Check: risk-based CA policies exist?
Get-MgIdentityConditionalAccessPolicy |
  Where-Object { $_.Conditions.SignInRiskLevels.Count -gt 0 } |
  Select-Object DisplayName,
    @{N='RiskLevels';E={$_.Conditions.SignInRiskLevels -join ", "}}

If all three queries return results, the architecture has three layers of defense against AiTM: phishing-resistant MFA prevents the proxy from capturing a usable session (the authentication is cryptographically bound to the legitimate server's TLS session), device compliance blocks tokens from unenrolled devices, and risk-based CA blocks sign-ins that Identity Protection flags as anomalous. In most tenants, queries 1 and 3 return nothing.

Pattern 2 — Consent phishing: what the grant looks like

The attacker tricks a user into granting OAuth permissions to a malicious application. The user receives an email with a link that presents what looks like a legitimate Microsoft consent prompt. They click "Accept." The application now has persistent API access to the user's mailbox, files, or both — without needing their credentials and without triggering any MFA challenge.

Here's the consent event in the Entra audit log:

{
  "activityDisplayName": "Consent to application",
  "activityDateTime": "2026-04-11T11:04:33Z",
  "category": "ApplicationManagement",
  "result": "success",
  "initiatedBy": {
    "user": {
      "userPrincipalName": "t.ashworth@yourtenant.onmicrosoft.com",
      "displayName": "Tom Ashworth",
      "id": "c2e4f6a8-b1d3-4c5e-a7f9-0b2d4e6f8a10"
    }
  },
  "targetResources": [
    {
      "displayName": "Document Reviewer Pro",
      "type": "ServicePrincipal",
      "id": "d3f5a7b9-c2e4-4d6f-b8a0-1c3e5f7a9b21",
      "modifiedProperties": [
        {
          "displayName": "ConsentAction.Permissions",
          "oldValue": "[]",
          "newValue": "[{\"ResourceId\":\"00000003-0000-0000-c000-000000000000\",\"Scope\":\"Mail.Read Mail.Send Files.ReadWrite.All User.Read offline_access\"}]"
        }
      ]
    }
  ]
}

Tom Ashworth consented to an application called "Document Reviewer Pro." The modifiedProperties array shows exactly what permissions were granted. Parse the Scope string: Mail.Read (read all of Tom's email), Mail.Send (send email as Tom), Files.ReadWrite.All (read and modify every file Tom has access to in OneDrive and SharePoint), User.Read (read Tom's profile), and offline_access (the application retains access even when Tom isn't signed in — this is the persistence mechanism). The ResourceId is 00000003-0000-0000-c000-000000000000, which is the Microsoft Graph API.

No credential was stolen. No MFA was bypassed. Tom granted the access himself, believing it was a legitimate productivity tool. The application now has a refresh token that persists until explicitly revoked — which could be weeks or months if nobody notices.
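The Scope string buried in modifiedProperties is worth extracting programmatically when you triage consent events in bulk. A minimal Python sketch, assuming the audit entry is the JSON shown above (the helper and the HIGH_RISK_SCOPES set are my own, not a Microsoft API):

```python
import json

# Scopes this sub calls out as dangerous; extend to taste.
HIGH_RISK_SCOPES = {"Mail.Read", "Mail.Send", "Files.ReadWrite.All", "offline_access"}

def granted_scopes(audit_entry: dict) -> set[str]:
    """Extract every OAuth scope granted in a 'Consent to application' event."""
    scopes = set()
    for target in audit_entry.get("targetResources", []):
        for prop in target.get("modifiedProperties", []):
            if prop.get("displayName") == "ConsentAction.Permissions":
                # newValue is itself a JSON string embedded in the audit record
                for grant in json.loads(prop["newValue"]):
                    scopes.update(grant["Scope"].split())
    return scopes

entry = {
    "activityDisplayName": "Consent to application",
    "targetResources": [{
        "type": "ServicePrincipal",
        "modifiedProperties": [{
            "displayName": "ConsentAction.Permissions",
            "oldValue": "[]",
            "newValue": '[{"ResourceId":"00000003-0000-0000-c000-000000000000",'
                        '"Scope":"Mail.Read Mail.Send Files.ReadWrite.All User.Read offline_access"}]',
        }]
    }],
}
print(sorted(granted_scopes(entry) & HIGH_RISK_SCOPES))
# ['Files.ReadWrite.All', 'Mail.Read', 'Mail.Send', 'offline_access']
```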

Check whether your tenant restricts user consent:

$authPolicy = Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"
$authPolicy.defaultUserRolePermissions.permissionGrantPoliciesAssigned

# Output:
ManagePermissionGrantsForSelf.microsoft-user-default-legacy

That value — microsoft-user-default-legacy — is the most permissive consent policy Microsoft offers. Users can consent to any application requesting permissions classified as "low impact." The problem: Microsoft classifies Mail.Read and User.Read as low impact, so an application that can read all of a user's email can be granted access by user consent without admin review. The offline_access scope is also classified as low impact — persistent access is granted silently.

An architected tenant restricts this. The options, in order of restrictiveness:

microsoft-user-default-legacy    → Users consent to any "low impact" permission (default, most permissive)
microsoft-user-default-low       → Users consent to apps from verified publishers only
microsoft-application-admin      → Users cannot consent — all requests routed to admin workflow

The architectural decision — documented in an ADR — is which level of restriction balances security against user productivity. Most organizations should be on microsoft-user-default-low at minimum, with an admin consent workflow for applications requesting permissions beyond the verified publisher scope. MSA4 covers this in depth.
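A compliance check can reduce the assigned value to one of the three tiers above. A small sketch, assuming the value has the ManagePermissionGrantsForSelf.<policy-id> shape shown earlier (exact policy-ID prefixes vary across consent settings, so treat the parsing as illustrative):

```python
# Tier descriptions taken from the table above.
CONSENT_POLICY_TIERS = {
    "microsoft-user-default-legacy": "most permissive: any low-impact permission",
    "microsoft-user-default-low": "restricted: verified publishers only",
    "microsoft-application-admin": "locked down: admin consent workflow only",
}

def assess_consent_policy(assigned_value: str) -> str:
    # Assigned values look like "ManagePermissionGrantsForSelf.<policy-id>";
    # keep only the policy-id part after the last dot.
    policy_id = assigned_value.rsplit(".", 1)[-1]
    return CONSENT_POLICY_TIERS.get(policy_id, "unknown policy: review manually")

print(assess_consent_policy(
    "ManagePermissionGrantsForSelf.microsoft-user-default-legacy"))
# most permissive: any low-impact permission
```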

Pattern 3 — Token theft and replay

The attacker steals a session token from a browser cookie (via infostealer malware), a memory dump, or a compromised browser extension. They replay the token from their own infrastructure. The session is already authenticated — no credential or MFA challenge occurs.

Token theft is harder to detect than AiTM because fewer fields are anomalous. If the attacker uses a residential proxy in the same country, the IP geolocation looks normal. The key indicator is the device signal:

{
  "createdDateTime": "2026-04-12T16:45:22Z",
  "userPrincipalName": "r.okafor@yourtenant.onmicrosoft.com",
  "conditionalAccessStatus": "success",
  "isInteractive": false,
  "riskLevelDuringSignIn": "none",
  "riskState": "none",
  "deviceDetail": {
    "deviceId": "",
    "displayName": null,
    "operatingSystem": "Windows 10",
    "browser": "Chrome 124.0",
    "isCompliant": null,
    "isManaged": null,
    "trustType": ""
  },
  "location": {
    "city": "London",
    "countryOrRegion": "GB"
  },
  "authenticationDetails": [
    {
      "authenticationMethod": "Previously satisfied",
      "authenticationStepResultDetail": "MFA requirement satisfied by claim in the token"
    }
  ]
}

"Previously satisfied" — token replay, same as AiTM. But riskLevelDuringSignIn: "none" — Identity Protection didn't flag this because the IP is in London, where the user works. deviceDetail.deviceId: "" — the device isn't the enrolled laptop that originally authenticated. This is the signal: a sign-in with a pre-satisfied MFA claim from an unregistered device. If CA requires device compliance, this is blocked. If it only requires MFA, it passes.

The architectural controls for token theft focus on limiting what a stolen token can do and how long it remains valid. Continuous Access Evaluation (CAE) is the primary mechanism — it forces real-time policy evaluation instead of relying on token expiry:

$caePolicy = Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/v1.0/identity/continuousAccessEvaluationPolicy"
$caePolicy | ConvertTo-Json

# Output:
{
  "displayName": "Continuous access evaluation policy",
  "description": "Tenant-wide policy that controls whether CAE is enabled",
  "isEnabled": true,
  "migrate": false,
  "groups": []
}

isEnabled: true — CAE is active at the tenant level. When CAE is enabled, supported applications (Exchange Online, SharePoint Online, Teams, and Microsoft Graph) perform near-real-time policy evaluation. If a user's account is disabled, their password is changed, or their risk level increases, active sessions are terminated within minutes rather than waiting for the access token to expire (which could be 60–90 minutes by default).

The groups: [] field determines scope. An empty array means CAE applies to all users. If specific groups are targeted and the compromised user isn't in scope, CAE doesn't apply to their sessions. Architecture ensures this is empty (all users) or explicitly documents which populations are excluded and why.

CAE doesn't prevent token theft. It limits the blast radius by reducing the time a stolen token remains usable. Combined with device compliance (which prevents the stolen token from being used on an unenrolled device) and phishing-resistant MFA (which prevents the token from being stolen via AiTM in the first place), the three controls form a layered defense against the token theft kill chain.

Pattern 4 — Human-operated ransomware

Ransomware isn't one event — it's the end of a chain. The kill chain in M365 environments follows a consistent pattern documented in MCRA and M-Trends: initial access (credential theft or phishing) → privilege escalation (move from standard user to admin) → lateral movement (spread across the environment) → backup destruction → encryption → extortion. Each stage produces telemetry in different M365 logs.

The architectural response isn't one control — it's layered controls at each stage. Map the chain to the course modules:

Stage 1: Initial access (AiTM phishing, password spray)
  → Phishing-resistant MFA, risk-based CA
  → MSA2, MSA3

Stage 2: Privilege escalation (compromise admin account, exploit standing privileges)
  → PIM (eligible not permanent), tiered admin, PAW strategy
  → MSA4

Stage 3: Lateral movement (move from endpoint to file server, DC)
  → Device compliance, network segmentation, MDE detection
  → MSA6, MSA10

Stage 4: Backup destruction (delete backup configs, disable recovery)
  → Backup integrity controls, immutable storage
  → MSA10 (detection architecture)

Stage 5: Encryption + extortion
  → Detection and response — this is where XDR correlation catches it
  → MSA10, MSA11

Stage 6: Post-incident
  → Incident response architecture, playbooks, containment procedures
  → MSA10

The critical insight: most organizations' detection fires at stage 5 (encryption). By then, credentials were stolen hours ago, the attacker has been on the file server for the duration, data has been exfiltrated, and backups may be compromised. The architectural value of this course is building controls that stop or detect stages 1–3, where containment is still meaningful.
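The stage mapping above is also the skeleton of a correlation rule: the question a detection architecture should answer is not "did something fire?" but "how early in the chain did it fire?". The Python sketch below illustrates the idea; the event labels are invented for illustration, not actual Defender XDR alert names.

```python
# Kill-chain stages from the mapping above; event labels are illustrative,
# not real Defender XDR alert types.
STAGE_OF_EVENT = {
    "aitm_signin": 1, "password_spray": 1,        # initial access
    "admin_role_assigned": 2,                     # privilege escalation
    "lateral_smb_session": 3,                     # lateral movement
    "backup_config_deleted": 4,                   # backup destruction
    "mass_encryption": 5,                         # encryption
}

def earliest_stage(events: list[str]) -> int:
    """Earliest kill-chain stage seen; lower = more containment options left."""
    stages = [STAGE_OF_EVENT[e] for e in events if e in STAGE_OF_EVENT]
    return min(stages) if stages else 0

# Detection that only fires on encryption sees the chain at stage 5:
print(earliest_stage(["mass_encryption"]))                  # 5
# Architecture that surfaces the sign-in anomaly sees it at stage 1:
print(earliest_stage(["aitm_signin", "mass_encryption"]))   # 1
```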

Check whether PIM is configured — the control that prevents standing admin privileges from being the entry point for stage 2:

$pimPolicies = Get-MgPolicyRoleManagementPolicy `
  -Filter "scopeId eq '/' and scopeType eq 'DirectoryRole'" -Top 5
$pimPolicies | Select-Object DisplayName | Format-List

If that returns policies, PIM is configured for directory roles. If it returns nothing or errors with Request_ResourceNotFound, PIM isn't deployed. Standing admin privileges exist — Global Admin, Exchange Admin, SharePoint Admin assigned permanently, available 24/7 whether the admin is working or not. An attacker who compromises a permanent GA identity has immediate, persistent, unrestricted access to everything.

Mapping threats to architectural investment

The MCRA's prioritization principle is practical: invest in controls that increase the attacker's cost for prevalent techniques. For most M365 environments, the investment sequence is:

Phase 1 (MSA1–MSA4) stops the entry. Phishing-resistant MFA blocks AiTM. Device compliance blocks token replay from unmanaged devices. Admin consent workflow blocks consent phishing. PIM limits blast radius by eliminating standing privilege.

Phase 2 (MSA5–MSA7) stops the objective. DLP across all channels blocks data exfiltration. Sensitivity labels ensure DLP has content to protect. Email security reduces the phishing attack surface.

Phase 3 (MSA8–MSA11) catches what prevention misses. Sentinel analytics detect the early indicators — unusual sign-in patterns, inbox rule creation, MFA method registration changes. Defender XDR correlates signals across the kill chain. Incident response architecture ensures containment happens faster than the attacker completes their objectives.

Phase 4 (MSA12–MSA14) prevents decay. Access reviews remove stale permissions the attacker would exploit. Lifecycle workflows ensure leavers don't retain access. Compliance mapping demonstrates the architecture addresses regulatory requirements.

This course follows the same sequence — not because it's tidy, but because it matches the threat prioritization. Identity architecture first because identity compromise is the most prevalent initial access method. Protection second because data loss is the most common attacker objective. Detection third because detection catches what prevention can't stop. Governance last because governance prevents the architecture from degrading over time.

What architecture can't prevent

Threat-informed architecture is honest about its limits. A legitimate user with legitimate access who decides to steal data is an insider threat that no access control architecture prevents — they already have access. DLP and Insider Risk Management detect and constrain the behavior, but they don't prevent the intent. A supply chain compromise enters through a legitimate trust relationship — the vendor's application is already trusted, and the initial access appears normal. A zero-day in a Microsoft service bypasses every control until Microsoft patches it.

Documenting these limits in your ADRs — via the residual risk field — is part of the architecture. The architecture stops what it can prevent, detects what it can't prevent, and documents the rest honestly. A tenant that acknowledges its gaps and monitors them is more secure than a tenant that assumes its controls are complete.

Before moving on, verify your understanding:

1. Look at the AiTM sign-in log entry. Name the specific field and value that reveal the token was replayed rather than freshly authenticated.
2. Explain why a CA policy that only requires builtInControls: ["mfa"] doesn't stop this attack, referencing how the authenticationStepResultDetail value is generated.
3. Run the consent policy query against your tenant. If the result shows microsoft-user-default-legacy, list three specific OAuth scopes that a user could grant to a malicious application without admin approval, and explain the risk each creates.


Reusable script — the commands from this sub assembled for operational use:

Run these five queries against your tenant. Each tests whether the architectural control for one prevalent attack pattern is active.

# 1. AiTM defense — phishing-resistant MFA in any CA policy?
(Get-MgIdentityConditionalAccessPolicy |
  Where-Object { $_.GrantControls.AuthenticationStrength -ne $null }).Count
# 0 = gap. No policy uses authentication strength.

# 2. AiTM/token theft defense — device compliance in any CA policy?
(Get-MgIdentityConditionalAccessPolicy |
  Where-Object { $_.GrantControls.BuiltInControls -contains "compliantDevice" }).Count
# 0 = gap. No policy requires compliant devices.

# 3. Consent phishing defense — user consent restricted?
(Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"
).defaultUserRolePermissions.permissionGrantPoliciesAssigned
# "microsoft-user-default-legacy" = gap. Users can consent to low-impact scopes.

# 4. Token theft defense — CAE enabled?
(Invoke-MgGraphRequest -Method GET `
  -Uri "https://graph.microsoft.com/v1.0/identity/continuousAccessEvaluationPolicy"
).isEnabled
# false = gap. Stolen tokens remain valid until natural expiry.

# 5. Ransomware defense — PIM configured for directory roles?
(Get-MgPolicyRoleManagementPolicy `
  -Filter "scopeId eq '/' and scopeType eq 'DirectoryRole'" -Top 1 -ErrorAction SilentlyContinue).Count
# 0 or error = gap. Standing admin privileges exist.

For each query whose result indicates a gap, map it: threat pattern → missing control → course module that teaches the architecture → priority (prevalent/proven/potential). This map is the starting input for your NE architecture package in MSA0.5.
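As a sketch of what that map can look like in code (the threat, control, module, and priority values come from this sub; the structure itself is my own):

```python
# One row per query above: (threat pattern, missing control, module, priority).
CHECKS = [
    ("AiTM phishing",    "phishing-resistant MFA",   "MSA2 + MSA3",       "prevalent"),
    ("AiTM/token theft", "device compliance in CA",  "MSA2 + MSA3",       "prevalent"),
    ("Consent phishing", "restricted user consent",  "MSA4",              "prevalent"),
    ("Token theft",      "CAE enabled tenant-wide",  "MSA2 + MSA3",       "prevalent"),
    ("Ransomware",       "PIM for directory roles",  "MSA4 + MSA8-MSA11", "prevalent"),
]

def gap_map(results: list[bool]) -> list[dict]:
    """results[i] is True when query i found the control in place."""
    return [
        {"threat": t, "missing_control": c, "module": m, "priority": p}
        for (t, c, m, p), present in zip(CHECKS, results)
        if not present
    ]

# Example: queries 1 and 3 returned nothing (a common finding).
for gap in gap_map([False, True, False, True, True]):
    print(f"{gap['threat']} -> {gap['missing_control']} -> {gap['module']}")
```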

Next

MSA0.5 — The Your Environment scenario. You've learned architecture thinking, mapped the stack, practised ADR documentation, and applied threat-informed prioritization. MSA0.5 documents your tenant baseline — the organization whose M365 security architecture you design, implement, and validate across the rest of this course. Current state: messy, real, and full of the gaps you just learned to diagnose.

You're reading the free modules of m365-security-architecture

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
