Module 14 — Check My Knowledge (20 questions)
1. How does consent phishing differ from credential phishing?
Credential phishing steals the user's password or session token. Consent phishing tricks the user into granting a malicious application permission to access their data. The application then uses its own credentials — the user's password is never stolen. Consent phishing survives password reset, token revocation, and MFA because the application authenticates independently of the user.
They are the same attack
Consent phishing requires malware
Consent phishing only targets administrators
Credential phishing = steal password/token. Consent phishing = grant application permission. Different persistence, different containment.
2. A malicious app has Mail.ReadWrite consent. You reset the user's password. Can the app still read email?
Yes. The application authenticates with its own client credentials plus the consent grant — not the user's password. Password reset is irrelevant. Revoke the consent in Entra ID → Enterprise Applications.
No — password reset revokes all access
Only if MFA is disabled
Only for 90 minutes
OAuth consent is independent of user password. Revoke the consent specifically.
3. Where do malicious application API calls appear in the logs?
AADServicePrincipalSignInLogs — not the user's SigninLogs. The application authenticates as a service principal. An analyst checking only the user's sign-in logs sees nothing.
SigninLogs under the user's UPN
Application calls are not logged
CloudAppEvents only
AADServicePrincipalSignInLogs. Not in the user's SigninLogs.
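A minimal hunting query along these lines — the AppId GUID is a placeholder for the application under investigation, and column names should be verified against your workspace schema:

```kql
// Hunt for API activity by a suspect application's service principal.
// This activity never appears in the user's SigninLogs — the app
// authenticates on its own behalf.
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(30d)
| where AppId == "00000000-0000-0000-0000-000000000000"  // placeholder AppId
| project TimeGenerated, ServicePrincipalName, AppId, IPAddress,
          ResourceDisplayName, ResultType
| order by TimeGenerated asc
```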
4. 12 users consented to the same app in 4 hours. What does this indicate?
A phishing campaign. Legitimate app adoption does not produce clustered consent events. All 12 users received the same phishing email and clicked the consent link.
Normal adoption
IT deployed a new tool
A single user consented 12 times
Phishing campaign. Clustered consent = coordinated social engineering.
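A sketch of a bulk-consent detection for this pattern, assuming the standard AuditLogs schema; the 4-hour window and 5-user threshold are illustrative and should be tuned to the tenant:

```kql
// Flag apps that collect consents from many distinct users in a short
// window — clustered consent is the signature of a coordinated campaign.
AuditLogs
| where TimeGenerated > ago(1d)
| where OperationName == "Consent to application"
| extend AppDisplayName = tostring(TargetResources[0].displayName),
         ConsentingUser = tostring(InitiatedBy.user.userPrincipalName)
| summarize ConsentingUsers = dcount(ConsentingUser),
            Users = make_set(ConsentingUser)
            by AppDisplayName, bin(TimeGenerated, 4h)
| where ConsentingUsers >= 5   // illustrative threshold
```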
5. The app publisher is unverified. The app requests Mail.ReadWrite and Files.ReadWrite.All. The app was registered 3 days ago. Is this suspicious?
Highly suspicious. Three red flags: unverified publisher, high-risk permissions, and recent registration. Legitimate business applications from established publishers are verified and have been registered for months or years. This pattern — unverified + high-risk + new — is the classic consent phishing fingerprint.
Not necessarily — new apps are often unverified
Only if the app name is suspicious
Unverified publishers are always malicious
Three red flags combined. Unverified + high-risk permissions + recently registered = consent phishing pattern.
6. What is the most effective preventive control against consent phishing?
Disable user consent — require admin approval for all application consent requests. This eliminates the attack vector entirely: users cannot consent to malicious applications because they cannot consent to any application without admin review. Implement with an admin consent request workflow and a 24-hour review SLA.
Better user training
Stronger MFA
Email filtering
Disable user consent. Admin approval workflow. Eliminates the attack vector entirely.
7. After deleting the malicious application, AADServicePrincipalSignInLogs shows failed sign-in attempts from the same AppId. What does this mean?
The attacker's infrastructure is still trying to use the application's credentials. The attempts fail because the application was deleted — revocation worked. Log the attacker's IPs as IOCs. No further action needed on these failures.
The revocation failed
The application was re-registered
A different application is in use
Revocation working. Attacker trying but blocked. Log IPs as IOCs.
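A query sketch to confirm revocation is holding and harvest the attacker IPs — AppId is again a placeholder, and ResultType is compared as a string since the success code is logged as "0" in some schemas:

```kql
// After the app is deleted, sign-in attempts from its AppId should fail.
// Collect the source IPs of the failures as IOCs.
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(7d)
| where AppId == "00000000-0000-0000-0000-000000000000"  // deleted app's AppId
| where tostring(ResultType) != "0"                      // failures only
| summarize Attempts = count(), FirstSeen = min(TimeGenerated),
            LastSeen = max(TimeGenerated) by IPAddress, ResultType
```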
8. The tenant-wide audit finds 47 consented apps with Mail.Read or higher. 12 are from unverified publishers. What do you do?
Review each of the 12 unverified-publisher apps individually: verify purpose, check registration date, confirm with business users. Remove dormant and unrecognised apps. Verify the 35 verified-publisher apps are still needed and appropriately permissioned. This is the quarterly OAuth hygiene process.
Remove all 47
Unverified = malicious, remove 12
Only review apps from this month
Individual review. Remove dormant and unrecognised. Quarterly hygiene process.
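One way to build the permission inventory from consent audit events — a sketch that assumes consent entries carry a `ConsentAction.Permissions` modified property (verify the field names in your tenant before relying on it):

```kql
// Inventory consent grants and the permissions they carried, to triage
// which apps hold Mail.Read or higher.
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| extend AppDisplayName = tostring(TargetResources[0].displayName)
| mv-expand Prop = TargetResources[0].modifiedProperties
| where tostring(Prop.displayName) == "ConsentAction.Permissions"
| extend Permissions = tostring(Prop.newValue)
| where Permissions has_any ("Mail.Read", "Mail.ReadWrite", "Files.ReadWrite")
| project TimeGenerated, AppDisplayName, Permissions
```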
9. Rule 3 (Bulk Consent) fires: 5 users consented to "Office Document Viewer" in 2 hours. The app is from a verified publisher. Is this a false positive?
Possibly, but investigate. Verified publisher reduces the risk — but does not eliminate it. Check: did IT announce this application? Is the publisher a known company? What permissions does it request? 5 consents in 2 hours could be a team adopting a legitimate tool — or a phishing campaign using a verified publisher's compromised application. Investigate before dismissing.
Yes — verified publisher means safe
Always block bulk consent
Disable the rule — too many false positives
Investigate. Verified publisher reduces risk but does not eliminate it. Check context before dismissing.
10. Rule 5 fires: an app accesses Exchange Online 8 minutes after consent. What does the timing indicate?
Automated exfiltration. A legitimate application is typically configured and used hours or days after consent. An application that accesses Exchange within minutes of consent is running an automated script — the attacker's infrastructure starts harvesting data as soon as the consent grants access. This is high-confidence consent phishing with active data exfiltration.
Normal application behaviour
The user tested the app immediately
Timing is not relevant
Automated exfiltration. Immediate access after consent = scripted data harvesting. High confidence malicious.
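The timing check behind a rule like this can be sketched as follows — it joins on ServicePrincipalId and assumes the consent event's target resource id matches the service principal id in the sign-in logs, which should be verified in-tenant:

```kql
// Measure the gap between consent and the app's first API activity;
// access within minutes of consent suggests scripted harvesting.
let Consents = AuditLogs
    | where OperationName == "Consent to application"
    | extend SPId = tostring(TargetResources[0].id),
             AppDisplayName = tostring(TargetResources[0].displayName)
    | project ConsentTime = TimeGenerated, SPId, AppDisplayName;
AADServicePrincipalSignInLogs
| summarize FirstAccess = min(TimeGenerated) by ServicePrincipalId
| join kind=inner Consents on $left.ServicePrincipalId == $right.SPId
| extend MinutesToFirstAccess = datetime_diff("minute", FirstAccess, ConsentTime)
| where MinutesToFirstAccess between (0 .. 30)   // illustrative window
| project AppDisplayName, ConsentTime, FirstAccess, MinutesToFirstAccess
```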
11. Delegated vs application permissions — which can a regular user consent to?
Delegated permissions only (unless admin consent is required by policy). Application permissions always require admin consent. This is why consent phishing targets delegated permissions — the user can grant them without admin approval. Disabling user consent (Recommendation 1) prevents this.
Both types
Application permissions only
Neither — all consent requires admin
Delegated permissions. Users can grant these without admin approval (by default). This is the consent phishing attack vector.
12. How does consent phishing connect to Modules 11 and 13?
Module 11 (AiTM) may include OAuth consent as a persistence mechanism — the attacker grants consent to a malicious app during the AiTM compromise window. Module 13 (Token Replay) identified OAuth application persistence as the likely explanation for access 20 days after containment. Module 14 — this module — teaches the investigation and prevention of the OAuth consent itself. The three modules cover different perspectives of the same persistence technique.
They are unrelated
M14 replaces M11 and M13
Only M13 is related
Connected persistence technique. M11 creates the access, M13 discovers the persistence, M14 investigates and prevents the consent mechanism itself.
13. What artifacts should you have after completing this module?
Four artifacts: (1) OAuth investigation playbook — consent identification, permission analysis, data access assessment, revocation. (2) 5 consent detection rules — deployable KQL. (3) Consent review checklist — permission risk matrix, publisher verification, risk scoring. (4) Application governance deployment guide — admin consent workflow, permission policies, app governance.
A certificate
Study notes
A list of queries
4 deployable artifacts: playbook, detection rules, consent checklist, governance guide.
14. The total detection rule count across Modules 11-14 is:
24 rules. M11: 8 AiTM rules. M12: 6 BEC rules. M13: 5 token replay rules. M14: 5 consent phishing rules. Together they cover the complete M365 attack chain from credential theft through token abuse, financial fraud, and persistent API access.
8 rules
15 rules
30 rules
24 rules: 8 + 6 + 5 + 5. Complete kill chain detection coverage.
15. You set user consent to "verified publishers only." A consent phishing campaign uses a compromised verified publisher's application registration. Does your control prevent it?
No. If the attacker compromised a verified publisher's Entra ID application registration and added malicious redirect URIs, the application appears verified. The "verified publishers only" control does not catch this. Full admin consent (Recommendation 1) would catch it because an admin would review the application's requested permissions and redirect URIs. Defence in depth: verified publisher restriction + admin consent workflow + app governance monitoring.
Yes — verified publisher is always safe
Verified publishers cannot be compromised
This scenario is impossible
Compromised verified publisher bypasses publisher-only restriction. Full admin consent + monitoring provides defence in depth.
16. What is the recommended frequency for the tenant-wide consent audit?
Quarterly. Run the three audit queries from subsection 15.6: inventory high-risk permissions, identify unverified publishers, and find dormant applications. Remove or verify as appropriate. Document the review for compliance.
Annually
Only after an incident
Daily
Quarterly. Regular enough to catch accumulated risk. Documented for compliance.
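The dormant-application part of the quarterly audit can be sketched like this — note that apps with no sign-ins at all within the retention window will not appear here, so the result should be compared against a full Graph inventory of consented apps:

```kql
// Quarterly hygiene: consented apps with no API activity in 90 days
// are removal candidates.
AADServicePrincipalSignInLogs
| summarize LastActivity = max(TimeGenerated) by AppId, ServicePrincipalName
| where LastActivity < ago(90d)
| order by LastActivity asc
```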
17. 8 of 12 compromised users show email access by the app. 4 show zero access. What do you do about the 4?
Revoke all 12 consents. Zero access does not mean safe — the attacker may not have processed those 4 users yet, the logging may be delayed, or the permissions may differ. It is safer to revoke an unexploited consent than to leave one that was exploited.
Those 4 are safe — only revoke 8
Wait to see if access occurs
Only revoke if the user reports an issue
Revoke all 12. Zero access ≠ safe. Safer to revoke all than to miss one.
18. Disabling user consent with a 24-hour admin review SLA maps to which compliance controls?
NIST CSF PR.AC-4 (Access permissions managed). ISO 27001 A.8.9 (Configuration management). SOC 2 CC6.1 (Logical access controls) and CC6.3 (Segregation of duties — the consent decision is separated from the user's access grant and requires admin approval).
NIST CSF DE.AE-2 only
No compliance mapping exists
ISO 27001 A.8.16 only
PR.AC-4, A.8.9, CC6.1, CC6.3. Access permission management + configuration management + segregation of duties.
19. How does M14 fit into the Modules 11-14 attack chain narrative?
M11 (AiTM) = initial access. M12 (BEC) = financial fraud objective. M13 (Token Replay) = session persistence. M14 (Consent Phishing) = application persistence. Each module covers a different persistence and exploitation technique that the attacker uses after initial access. Together they cover every major M365 post-compromise technique.
M14 is standalone — no connection
M14 replaces all previous modules
Only M13 connects to M14
M11 = entry. M12 = financial objective. M13 = session persistence. M14 = application persistence. Complete post-compromise coverage.
20. What is the complete artifact inventory across Modules 11-14?
M11: AiTM playbook + 8 rules + IR template + hardening checklist. M12: BEC playbook + 6 rules + financial fraud checklist + BEC hardening. M13: Token playbook + 5 rules + containment checklist + CAE/token protection guide. M14: OAuth playbook + 5 rules + consent review checklist + governance guide. Total: 4 investigation playbooks, 24 detection rules, 4 hardening/response checklists, 4 deployment/governance guides. All deployable and production-ready.
A collection of study notes
8 detection rules total
One playbook covering all modules
4 playbooks + 24 rules + 4 checklists + 4 guides. Complete operational toolkit. Deployable, not theoretical.