Final assessment (12 questions)
1. What protection does Defender for Office 365 add beyond EOP?
Safe Links (URL time-of-click scanning), Safe Attachments (sandbox detonation), advanced anti-phishing (impersonation, mailbox intelligence), and with P2: Threat Explorer, AIR, Campaign Views, and Advanced Hunting.
Better spam filtering
Email encryption
EOP handles known threats. Defender handles unknown and sophisticated threats — zero-day URLs, unknown malware, impersonation attacks.
2. The Module 13 phishing email passed Safe Links. Why?
A Cloudflare CAPTCHA blocked automated URL scanning. Safe Links saw the CAPTCHA page, not the phishing proxy. Only human visitors who solved the CAPTCHA reached the attack page.
Safe Links was disabled
The URL was on the allow list
Anti-analysis CAPTCHAs are the primary evasion technique against automated URL scanning. Understanding this limitation prevents false confidence in Safe Links as a complete solution.
3. Why set "Let users click through" to No in Safe Links policy?
With click-through enabled, users can override Safe Links blocks — exactly what happened in Module 13, where 5 of 7 users clicked through the warning. Disabling click-through makes Safe Links blocks absolute — maximum protection at the cost of occasional false-positive frustration.
It improves scanning speed
It reduces Safe Links costs
A warning that users can ignore is a suggestion, not a control. Module 13 proved that most users ignore it. Remove the option to create an effective control.
4. Dynamic Delivery vs Block mode for Safe Attachments — which and why?
Dynamic Delivery — same detection (sandbox detonation) without email delivery delay. Users receive the email body immediately, attachment available within 1-2 minutes. Block mode delays the entire email, which is noticeable at scale.
Block — maximum security always
Monitor — least user impact
Same security, better UX. Dynamic Delivery is the correct default for commercial organizations. Block mode is for environments where users must not see any part of a suspicious email.
5. A phishing email passes SPF, DKIM, and DMARC. Is it safe?
No. Authentication verifies origin, not safety. The attacker owns the domain and configured authentication correctly. Passing proves "this email is from attacker-domain.com" — not that the domain is trustworthy.
Yes — all protocols passed
Depends on DMARC policy
The most dangerous misconception in email security. Authentication protects your domain from spoofing. It does not protect you from attacker-owned domains.
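The point shows up directly in message headers. A minimal sketch in Python, parsing a simplified Authentication-Results header — the header text, attacker-domain.com, and the sender IP are illustrative, not from a real message:

```python
import re

# Simplified Authentication-Results header from a phishing message.
# The attacker owns attacker-domain.com, so every check legitimately passes;
# the sender IP is a documentation address (illustrative).
header = (
    "spf=pass (sender IP is 203.0.113.7) smtp.mailfrom=attacker-domain.com; "
    "dkim=pass (signature was verified) header.d=attacker-domain.com; "
    "dmarc=pass action=none header.from=attacker-domain.com"
)

# Pull out the three verdicts: {'spf': 'pass', 'dkim': 'pass', 'dmarc': 'pass'}
verdicts = dict(re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header))
all_pass = all(v == "pass" for v in verdicts.values())
# all_pass is True -- authentication confirms origin, not trustworthiness
```

Any pipeline that treats `all_pass` as a safety verdict has conflated "authenticated" with "benign" — the exact misconception this question targets.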
6. ZAP removed 4 of 23 phishing emails after 23 minutes. What is the gap?
23 minutes of exposure during which all 23 emails were accessible in inboxes. 19 were not removed by ZAP (users had already interacted with them). The ZAP gap is the time between delivery and remediation — minimize it through better initial detection, not by relying on ZAP.
ZAP failed on 19 emails
23 minutes is normal and acceptable
ZAP is post-delivery cleanup, not real-time protection. The gap is inherent to the mechanism. Better initial detection (aggressive thresholds, transport rules) is always superior to post-delivery remediation.
7. You want to deploy DMARC reject for your domain. What is the correct sequence?
p=none (30 days of monitoring, identify all legitimate senders) -> add all senders to SPF, configure DKIM -> p=quarantine (30 days, verify no legitimate email goes to Junk) -> p=reject. Skipping straight to reject blocks legitimate third-party email.
Deploy reject immediately for maximum protection
None is sufficient — reject is too aggressive
DMARC deployment is a staged process. Each stage gives you data to refine before increasing enforcement. Skipping stages risks blocking legitimate business email.
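The staged records might look like this in DNS (example.com, the reporting address, and the timings are placeholders — keep the rua tag at every stage so aggregate reports keep flowing):

```
; Stage 1 -- monitor only (~30 days)
_dmarc.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

; Stage 2 -- quarantine (~30 days); pct can ramp enforcement gradually
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@example.com"

; Stage 3 -- full enforcement
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```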
8. Why is a URL pattern transport rule more effective than a domain-based block?
Domains are disposable — the attacker registers new ones per wave. The URL pattern is generated by the phishing kit and stays consistent across all domains. One transport rule catches all waves.
Transport rules cannot block domains
URL patterns are harder to change
IOC durability: domains are minutes to register, URL patterns are baked into kit code. Same principle as Module 13 — detect the kit, not the infrastructure.
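The durability difference can be sketched in Python — the token format in KIT_PATH is a hypothetical stand-in for whatever path structure the real kit generates, and the second domain is invented for illustration:

```python
import re

# Hypothetical kit-generated path: two-letter locale segment plus a
# 32-char hex session token. The real pattern comes from analyzing the kit.
KIT_PATH = re.compile(r"/[a-z]{2}/auth\?session=[0-9a-f]{32}$")

urls = [
    "https://northgate-voicemail.com/en/auth?session=" + "ab" * 16,  # wave 1
    "https://nv-secure-login.net/de/auth?session=" + "0f" * 16,      # wave 2, fresh domain
    "https://example.com/en/newsletter",                             # benign
]

# One pattern catches every wave, regardless of which domain the attacker
# registered that morning; a domain blocklist catches only known waves.
flagged = [u for u in urls if KIT_PATH.search(u)]
```

The same regex dropped into a transport rule's URL-match condition blocks wave 2 before its domain appears on any blocklist.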
9. Threat Explorer vs Advanced Hunting for email investigation — when do you use each?
Threat Explorer for email-specific investigation (campaign view, bulk remediation, delivery timeline). Advanced Hunting for cross-domain correlation (joining email data with sign-in logs, building detection rules). Use both during a phishing investigation.
Threat Explorer always — it has a better interface
Advanced Hunting always — KQL is more powerful
Different tools for different tasks. Campaign View and bulk remediation are Threat Explorer strengths. Cross-table joins and detection rule creation are Advanced Hunting strengths. Master both.
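As a sketch of the cross-domain side, an Advanced Hunting query can join email telemetry to sign-in telemetry — something Threat Explorer cannot do. The sender domain below is the Module 13 example; table and column names follow the standard Defender schema, but treat this as a starting point, not a finished detection:

```kql
// Users who received mail from the campaign domain, clicked a URL in it,
// and then signed in successfully -- candidates for credential compromise.
EmailEvents
| where Timestamp > ago(7d)
| where SenderFromDomain == "northgate-voicemail.com"
| join kind=inner (UrlClickEvents) on NetworkMessageId
| join kind=inner (
    AADSignInEventsBeta
    | where ErrorCode == 0
  ) on $left.RecipientEmailAddress == $right.AccountUpn
| project Timestamp, RecipientEmailAddress, Url, IPAddress
```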
10. You identify a phishing campaign in 200 mailboxes. Soft delete or hard delete?
Soft delete first. Verify the campaign is malicious, then hard delete if confirmed. Soft delete is reversible — a false-positive hard deletion across 200 mailboxes cannot be undone.
Hard delete — speed matters
Leave them and block the sender
Reversibility for bulk actions. Soft delete removes the emails from user view (same user experience as hard delete) while keeping them recoverable for 14 days.
11. First contact safety tip shows "You don't often get email from this sender." Why is this effective against phishing?
Phishing campaigns come from new domains the user has never seen before. The safety tip alerts them to verify the sender before interacting. It is a low-friction security awareness mechanism built into the email itself — no training required.
It blocks the email
It only works for internal emails
The Module 13 phishing domain (northgate-voicemail.com) was a first-time sender for every recipient. The first contact safety tip would have flagged every one of those 23 emails. Zero configuration per email — it works automatically for any new sender.
12. Email AIR auto-remediates 69% of email threats. The remaining 31% are pending approval. Should you increase automation?
Review the 31% pending items. If they are consistently approved without changes, the detection is accurate and automation can be increased. If some are rejected or modified, keep the approval requirement — human review is catching genuine edge cases.
Yes — 69% proves it works
No — always require human approval
The decision is data-driven, not dogmatic. If the 31% pending are consistently approved, increase automation; if some are rejected, keep the current level. The data tells you when it is safe to advance.