Six scenario questions. Each presents a situation and asks what you'd do or what you'd conclude. Pick your answer before reading the explanation.
Scenario 1. You inherit an M365 tenant from a predecessor who has left the organization. The tenant has 18 Conditional Access policies, all enabled. You query the policies with PowerShell and see that every policy uses the basic MFA built-in control with no authentication strength policy referenced. Your CISO asks whether the tenant is protected against AiTM phishing. What is the most accurate response?
Yes — 18 CA policies requiring MFA means the tenant has comprehensive identity protection a
MFA is enforced, but without an authentication strength policy, any MFA method satisfies the requirement — including push notification MFA, which does not prevent AiTM phishing. The attacker's reverse proxy captures the session token after MFA completion. The number of policies doesn't change this — all 18 have the same gap.
No — without an authentication strength policy, any MFA method satisfies the policies, including push notification MFA, which AiTM phishing bypasses. The policies enforce authentication but not phishing-resistant authentication. b
Correct. The distinction between requiring basic MFA and requiring phishing-resistant MFA via an authentication strength policy is the difference between stopping AiTM and not stopping it. The tenant has configuration (MFA enforced) without architecture (no documented decision about which MFA methods are acceptable for which risk levels).
Partially — push MFA is weaker than FIDO2 but still provides meaningful protection against most phishing attempts c
Push MFA protects against basic credential stuffing and password spray (the attacker doesn't have the second factor). But AiTM is not basic phishing — the proxy captures the session token after the user approves the push. "Meaningful protection against most phishing" is accurate for traditional phishing but inaccurate for AiTM, which is the prevalent identity attack pattern for M365.
You need to check the Identity Protection risk policies before answering — the CA policies alone don't tell the full story d
Risk-based CA policies would help detect and block suspicious sign-ins, but they don't prevent AiTM directly. The AiTM attack produces a legitimate session token with a valid MFA claim. Even if Identity Protection flags it after the fact, the initial session is established. Phishing-resistant MFA is the architectural control that prevents it at the authentication step.
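The gap described above — a grant control requiring MFA with no authentication strength referenced — is mechanically checkable. A minimal Python sketch over local sample data; the dict shape loosely mirrors the Graph API conditional access policy object (`grantControls.builtInControls`, `grantControls.authenticationStrength`), but nothing here queries a live tenant, and the helper name is an assumption:

```python
# Sketch: flag CA policies that require basic MFA without referencing
# an authentication strength policy. Sample data only, not a tenant query.

def flags_basic_mfa_only(policy):
    """True if the policy requires MFA but references no authentication strength."""
    grant = policy.get("grantControls") or {}
    requires_mfa = "mfa" in (grant.get("builtInControls") or [])
    has_strength = grant.get("authenticationStrength") is not None
    return requires_mfa and not has_strength

policies = [
    {"displayName": "Require MFA - all users",
     "grantControls": {"builtInControls": ["mfa"], "authenticationStrength": None}},
    {"displayName": "Phishing-resistant MFA - admins",
     "grantControls": {"builtInControls": [],
                       "authenticationStrength": {"displayName": "Phishing-resistant MFA"}}},
]

gaps = [p["displayName"] for p in policies if flags_basic_mfa_only(p)]
print(gaps)  # only the basic-MFA policy is flagged
```

In the scenario, all 18 policies would land in `gaps` — the count of policies never enters the check.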
Scenario 2. You query the Intune device compliance settings and find that the secure-by-default setting is disabled and the compliance check-in threshold is set to 30 days. A CA policy requires device compliance for Exchange and SharePoint access. The CISO asks whether unmanaged personal devices can access corporate email. What do you report?
No — the CA policy blocks non-compliant devices from Exchange and SharePoint a
The policy exists, but with secure-by-default disabled, devices with no compliance policy assigned are treated as compliant. If a personal device has no Intune compliance policy (because it's not enrolled), it passes the device compliance check by default. The CA policy looks like it's enforcing device trust. It isn't.
Only if the personal device is enrolled in Intune — unenrolled devices are automatically blocked b
This would be correct if secure-by-default were enabled. With it disabled, the system treats the absence of compliance evaluation as compliance. Unenrolled devices don't fail the check — they bypass it.
Yes — with secure-by-default disabled, devices without a compliance policy are treated as compliant. The CA policy's device compliance control is enforcing a signal that may be empty for unmanaged devices. Enable secure-by-default and reduce the check-in threshold. c
Correct. This is a signal-chain gap. The CA policy consumes the compliance signal from Intune, but the Intune default treats "no evaluation" as "compliant." The signal chain looks connected but is meaningless for unmanaged devices. Architecture fixes this by enabling secure-by-default (unevaluated devices are non-compliant) and reducing the check-in threshold to 7–14 days.
You can't determine this from the compliance settings alone — you need to check the specific CA policy conditions d
The CA policy conditions matter, but the question is about the upstream signal. Even a perfectly scoped CA policy can't enforce device trust if the compliance signal defaults to "compliant" for unmanaged devices. The secure-by-default setting is the root cause, not the CA policy scope.
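The secure-by-default behaviour described above can be modelled in a few lines. This is an illustrative Python sketch of the decision logic, not Intune's implementation; the function and parameter names are assumptions:

```python
# Sketch: what compliance state the CA policy sees for a device,
# depending on the secure-by-default setting.

def effective_compliance(has_compliance_policy, evaluated_compliant, secure_by_default):
    """Return the compliance signal the CA policy consumes."""
    if not has_compliance_policy:
        # No policy assigned (e.g. unenrolled personal device):
        # the tenant default decides the outcome.
        return "noncompliant" if secure_by_default else "compliant"
    return "compliant" if evaluated_compliant else "noncompliant"

# Unenrolled personal device, secure-by-default disabled (the scenario):
print(effective_compliance(False, None, secure_by_default=False))  # compliant -> access granted
# Same device after enabling secure-by-default:
print(effective_compliance(False, None, secure_by_default=True))   # noncompliant -> blocked
```

The sketch makes the signal-chain gap visible: the first call grants access to a device Intune has never evaluated.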
Scenario 3. You're writing an ADR for the authentication method selection. The Alternatives Considered field currently reads: "We considered FIDO2 hardware keys and certificate-based authentication but decided passkeys were better." Your colleague reviews the ADR and says it's incomplete. What's missing?
The field needs to include the cost of each alternative a
Cost is relevant context, but the core issue is that the rejection reasons are absent. "Decided passkeys were better" is a conclusion, not a rationale. Each alternative needs a specific reason it was rejected — tied to a constraint, not a preference.
Each alternative needs a specific rejection reason tied to a specific constraint — not "passkeys were better" but why FIDO2 was rejected (procurement cost and timeline) and why CBA was rejected (no PKI infrastructure). The alternatives must be defensible, not just listed. b
Correct. "Decided passkeys were better" is undocumented preference. "FIDO2 rejected — procurement cost of $30,000–50,000, 12-week lead time, threat model requires immediate coverage" is a defensible rejection reason tied to the organization's specific constraints. The alternatives field exists so a future reader understands why the obvious options weren't chosen — and each rejection must reference a specific constraint, not a general opinion.
The field needs to include at least four alternatives, not just two c
The number of alternatives matters less than the quality of the rejection reasons. Two alternatives with specific, constraint-based rejection reasons are more valuable than four alternatives with vague dismissals. Though in practice, the MSA0.3 worked example included four alternatives — because the authentication decision genuinely has four viable options.
The field is fine — "better" is a valid comparative judgment from the architect d
"Better" is an opinion. An ADR that says "passkeys are better than FIDO2" gives the successor no way to evaluate whether the decision still holds if constraints change. If the procurement budget is approved next quarter, is FIDO2 now the right choice? Without knowing why FIDO2 was rejected, the successor can't answer. That's the value of specific rejection reasons.
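For contrast with the rejected wording, a hedged illustration of what the rewritten field might look like. The figures come from the explanation above; the layout is one possible convention, not a prescribed ADR template:

```
Alternatives Considered:

- FIDO2 hardware keys. Rejected: procurement cost of $30,000–50,000 and a
  12-week lead time; the threat model requires immediate coverage.
- Certificate-based authentication. Rejected: no existing PKI infrastructure
  to issue and manage certificates.
- Passkeys. Chosen: phishing-resistant with no hardware procurement or PKI
  dependency.
```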
Scenario 4. You review an Entra ID sign-in log entry and see that the authentication step result says "MFA requirement satisfied by claim in the token" with an empty device ID field. The sign-in is from a country where the user doesn't normally work. What does this combination of fields suggest?
The token was likely stolen and replayed. "Satisfied by claim in the token" means the session was pre-authenticated — no interactive MFA occurred. The empty device ID means the request came from an unregistered device. Combined with the anomalous location, this is consistent with AiTM token theft or cookie replay. a
Correct. Three signals converge: pre-satisfied MFA claim (no fresh authentication), no enrolled device (the attacker's machine), and anomalous geolocation. Any one of these could be benign — a user on a new device while traveling. All three together are high-confidence indicators of token theft. The architectural controls that prevent this: phishing-resistant MFA (blocks the initial theft), device compliance (blocks replay from unregistered devices), and risk-based CA (blocks sign-ins flagged as anomalous).
The user is traveling and using a personal device — this is normal for mobile workers b
Possible, but "MFA satisfied by claim in the token" means the user didn't authenticate interactively. If they were traveling and signing in on a new device, they'd go through the authentication flow — and the log would show the specific MFA method they used, not "Previously satisfied." The pre-satisfied claim indicates a replayed token, not a fresh sign-in.
The empty device ID just means the device isn't enrolled in Intune — it doesn't indicate compromise c
An empty device ID alone is common and not necessarily suspicious — many users have unregistered devices. But combined with a pre-satisfied MFA claim and anomalous location, it becomes a strong indicator. Architecture addresses this by requiring device compliance, which blocks any sign-in from an unregistered device regardless of the MFA claim.
You can't determine anything from these fields — you need to check the full incident in Defender XDR d
Investigation in Defender XDR is the right next step, but the sign-in log fields are already diagnostic. The combination of pre-satisfied MFA, empty device ID, and anomalous location is a well-documented pattern for token theft. The architectural question is whether controls existed to prevent this sign-in from succeeding — and the answer from these fields is no.
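The three converging signals can be expressed as a simple triage check. A Python sketch over a hand-built log entry; the field names (`mfaDetail`, `deviceId`, `country`) approximate Entra sign-in log properties, and the entry itself is fabricated sample data, not real log output:

```python
# Sketch: score a sign-in log entry against the three token-theft
# signals discussed above. Illustrative thresholding, not a detection product.

def token_theft_indicators(entry, usual_countries):
    return {
        "pre_satisfied_mfa": entry.get("mfaDetail")
            == "MFA requirement satisfied by claim in the token",
        "no_device_id": not entry.get("deviceId"),
        "anomalous_location": entry.get("country") not in usual_countries,
    }

entry = {
    "mfaDetail": "MFA requirement satisfied by claim in the token",
    "deviceId": "",
    "country": "RU",
}
signals = token_theft_indicators(entry, usual_countries={"GB"})
print(sum(signals.values()))  # 3 of 3 -> high-confidence token theft pattern
```

Any single signal firing alone could be benign, as the explanations above note; it's the conjunction that is diagnostic.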
Scenario 5. Your tenant's Secure Score shows 0 out of 9 points for both the sign-in risk policy and user risk policy recommendations. The CISO asks what these scores mean for the architecture. How do you explain it?
The scores are low because the tenant hasn't had any risky sign-ins — the controls aren't needed yet a
The 0/9 score means the policies don't exist, not that they've evaluated and found no risk. Identity Protection may be generating risk signals (it runs automatically on E5 tenants), but without a CA policy that conditions on sign-in risk or user risk, those signals aren't acted on. The controls aren't "not needed" — they're absent.
Enable both policies immediately at their default settings to bring the scores to 9/9 b
Enabling risk-based policies is the right architectural direction, but "immediately at default settings" skips the design-justify-implement-validate cycle. Which risk levels trigger which actions? Does "high risk" block or require password change? What about medium risk? What exceptions exist for service accounts? These are architecture decisions that need ADRs, not dashboard optimisations.
Identity Protection generates risk signals for E5 users, but no CA policy acts on them. The signal chain between Identity Protection and Conditional Access is broken. Both policies need to be designed — which risk levels, which actions, which exceptions — and documented as ADRs in MSA3. c
Correct. This is a signal-chain gap, not a missing feature. Identity Protection is active (the risk signals exist). The gap is that no CA policy consumes the signals. The fix isn't "enable the policy" — it's "design the risk-based CA architecture, document the decision, implement it, validate it." That's MSA3.
The recommendations are only relevant for E5 users — they don't apply to the 560 E3 users d
Identity Protection risk signals only generate for E5-licensed users, so the risk-based CA policies can only evaluate E5 sign-ins. This is a licensing limitation documented in the risk register — but it doesn't mean the policies aren't worth implementing. Protecting 250 E5 users (including all administrators and finance staff) is still architecturally significant. The E3 gap gets a residual risk entry and a compensating control.
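The design questions raised above — which risk levels, which actions, which exceptions — can be sketched as a decision table. The mapping below is one illustrative design for the ADR to capture, not a Microsoft default or a recommendation; every name in it is an assumption:

```python
# Sketch: an example risk-level -> action design for risk-based CA.
# The ADR documents choices like these; the values here are illustrative.

SIGN_IN_RISK_ACTIONS = {"high": "block",
                        "medium": "require_phishing_resistant_mfa",
                        "low": "allow"}
USER_RISK_ACTIONS = {"high": "require_password_change",
                     "medium": "require_mfa",
                     "low": "allow"}

def action_for(sign_in_risk, user_risk, is_e5_user):
    # Identity Protection risk only generates for E5-licensed users,
    # so risk-based CA can only evaluate E5 sign-ins (the licensing gap above).
    if not is_e5_user:
        return "no_risk_signal"  # residual risk: compensating control needed
    if USER_RISK_ACTIONS[user_risk] != "allow":
        return USER_RISK_ACTIONS[user_risk]
    return SIGN_IN_RISK_ACTIONS[sign_in_risk]

print(action_for("high", "low", is_e5_user=True))   # block
print(action_for("low", "low", is_e5_user=False))   # no_risk_signal
```

Writing the table down is the point: each row is a decision a successor can evaluate, which a 9/9 dashboard score alone never records.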
Scenario 6. You're configuring the lab tenant. You create the persona security groups and add the IT Director to the MFA exclusion group. A colleague asks why you're deliberately introducing a security gap into the lab. How do you explain it?
It's a mistake — the IT Director should never be excluded from MFA, even in a lab a
In production, Phil being excluded from MFA is an architectural failure. In the lab, it's the realistic starting state that the course teaches you to remediate. A real environment would have this gap — the lab mirrors it so you practise the diagnosis, documentation, and remediation in a safe environment.
The lab mirrors a realistic starting state — gaps included. The course teaches you to diagnose, document, and remediate these gaps. Starting with a clean tenant would skip the real-world problem the architecture is designed to fix. b
Correct. A clean lab with perfect configuration teaches you to maintain an ideal state. A messy lab with realistic gaps teaches you to improve a real one — which is the skill you'll actually use. Every remediation you perform in the lab (removing the IT Director from the exclusion, configuring PIM, designing authentication strength policies) mirrors what you'll do in your production environment.
The exclusion group is needed for testing — some lab exercises require sign-ins without MFA c
Some exercises do require testing policy evaluation without MFA interference, but that's not the reason the IT Director is in the group. The reason is that a real environment has an IT Director excluded from MFA with no documentation. The lab replicates the gap so you practise fixing it through architecture, not by removing the group in advance.
It doesn't matter — it's a lab tenant with no real data d
The lab tenant doesn't contain real sensitive data, but the lab exercises produce real skills. Dismissing the lab state as unimportant means you'll skip the diagnostic and remediation exercises — the exercises that build the skills you'll use in production. The lab's value comes from its realism, not its data.