8.10 Module Assessment

90 minutes · Module 8

Key takeaways

  • Email is the #1 attack vector — Defender for Office 365 is the primary prevention layer
  • EOP provides baseline protection (spam, malware, basic phishing). Defender adds Safe Links, Safe Attachments, advanced anti-phishing, and investigation tools.
  • P1 provides protection. P2 adds investigation (Threat Explorer, AIR, Campaign Views, Advanced Hunting).
  • Anti-phishing: protect executives and the finance team by name (protected users) + protect your own domains and partner domains. Start at phishing threshold 2 (Aggressive).
  • First contact safety tip is a low-cost, high-value control — enable for all users
  • Safe Links: enable time-of-click scanning. Disable "let users click through." Track all clicks.
  • Safe Links fails against anti-analysis CAPTCHAs — first contact safety tips and user training fill this gap
  • Safe Attachments: Dynamic Delivery is the best balance (same detection, no email delay)
  • ZAP is a safety net, not a primary control. The ZAP gap (23+ minutes) means phishing emails are accessible before cleanup.
  • Email authentication (SPF, DKIM, DMARC) verifies origin, NOT safety. Attackers configure authentication on their own domains.
  • Deploy DMARC in stages: none (monitoring) -> quarantine -> reject. Never skip to reject without data.
  • Transport rules provide pre-delivery blocking. Kit URL patterns are the most effective transport rule condition.
  • Threat Explorer for email investigation. Advanced Hunting for cross-domain correlation.
  • Soft delete first, hard delete only when confirmed malicious — reversibility matters for bulk actions
  • Email AIR automates investigation and remediation. Start with require-approval, increase automation with confidence.

Final assessment (12 questions)

1. What protection does Defender for Office 365 add beyond EOP?

Safe Links (URL time-of-click scanning), Safe Attachments (sandbox detonation), advanced anti-phishing (impersonation, mailbox intelligence), and with P2: Threat Explorer, AIR, Campaign Views, and Advanced Hunting.
Better spam filtering
Email encryption

2. The Module 13 phishing email passed Safe Links. Why?

A Cloudflare CAPTCHA blocked automated URL scanning. Safe Links saw the CAPTCHA page, not the phishing proxy. Only human visitors who solved the CAPTCHA reached the attack page.
Safe Links was disabled
The URL was on the allow list

3. Why set "Let users click through" to No in Safe Links policy?

With click-through enabled, users can override Safe Links blocks — exactly what happened in Module 13 where 5 of 7 users clicked through the warning. Disabling click-through makes Safe Links blocks absolute — maximum protection at the cost of rare false positive frustration.
It improves scanning speed
It reduces Safe Links costs

4. Dynamic Delivery vs Block mode for Safe Attachments — which and why?

Dynamic Delivery — same detection (sandbox detonation) without email delivery delay. Users receive the email body immediately, attachment available within 1-2 minutes. Block mode delays the entire email, which is noticeable at scale.
Block — maximum security always
Monitor — least user impact

5. A phishing email passes SPF, DKIM, and DMARC. Is it safe?

No. Authentication verifies origin, not safety. The attacker owns the domain and configured authentication correctly. Passing proves "this email is from attacker-domain.com" — not that the domain is trustworthy.
Yes — all protocols passed
Depends on DMARC policy
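
"The attacker owns the domain" is easy to see in DNS terms: all three authentication records live in a zone the attacker controls, so publishing passing records is trivial. A sketch (attacker-domain.com is from the answer above; the IP and key are placeholders):

```
; Zone for attacker-domain.com, fully controlled by the attacker
attacker-domain.com.                       TXT "v=spf1 ip4:203.0.113.10 -all"
selector1._domainkey.attacker-domain.com.  TXT "v=DKIM1; k=rsa; p=MIIB...(public key)"
_dmarc.attacker-domain.com.                TXT "v=DMARC1; p=reject"
```

All three checks pass for mail the attacker sends, which is exactly why a pass proves origin, not safety.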

6. ZAP removed 4 of 23 phishing emails after 23 minutes. What is the gap?

23 minutes of exposure where all 23 emails were accessible in inboxes. 19 were not removed by ZAP (users had already interacted with them). The ZAP gap is the time between delivery and remediation — minimize it through better initial detection, not by relying on ZAP.
ZAP failed on 19 emails
23 minutes is normal and acceptable

7. You want to deploy DMARC reject for your domain. What is the correct sequence?

p=none (30 days monitoring, identify all legitimate senders) -> add all senders to SPF, configure DKIM -> p=quarantine (30 days, verify no legitimate email goes to Junk) -> p=reject. Skipping to reject blocks legitimate third-party email.
Deploy reject immediately for maximum protection
None is sufficient — reject is too aggressive
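
The staged sequence corresponds to three successive versions of the same DNS TXT record at _dmarc; a sketch using the reserved example.com domain and a placeholder report mailbox:

```
; Stage 1 (30 days): monitor only; aggregate reports reveal every legitimate sender
_dmarc.example.com.  TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

; Stage 2 (30 days): failing mail goes to Junk; verify no legitimate mail is hit
_dmarc.example.com.  TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"

; Stage 3: failing mail is refused outright
_dmarc.example.com.  TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

The optional pct tag (e.g. "p=quarantine; pct=25") allows an even more gradual ramp within stage 2.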

8. Why is a URL pattern transport rule more effective than a domain-based block?

Domains are disposable — the attacker registers new ones per wave. The URL pattern is generated by the phishing kit and stays consistent across all domains. One transport rule catches all waves.
Transport rules cannot block domains
URL patterns are harder to change
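
A minimal sketch of the idea in Python, assuming a hypothetical kit that generates paths of the form /&lt;8-char token&gt;/auth/verify (the domains, token format, and path segment here are invented for illustration):

```python
import re

# Hypothetical kit-generated URL shape: random 8-char token + fixed path suffix.
# The pattern keys on the path, so it matches regardless of the domain in use.
KIT_URL_PATTERN = re.compile(r"https?://[^/\s]+/[a-z0-9]{8}/auth/verify", re.IGNORECASE)

urls = [
    "https://login-secure-0423.com/k3j9x2mq/auth/verify",  # wave 1 domain
    "https://m365-verify-new.net/8fz0qw1a/auth/verify",    # wave 2 domain
    "https://contoso.com/docs/getting-started",            # legitimate URL
]

matches = [bool(KIT_URL_PATTERN.search(u)) for u in urls]
print(matches)  # [True, True, False]
```

One pattern catches both waves while a domain block list would need updating per wave.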

9. Threat Explorer vs Advanced Hunting for email investigation — when do you use each?

Threat Explorer for email-specific investigation (campaign view, bulk remediation, delivery timeline). Advanced Hunting for cross-domain correlation (joining email data with sign-in logs, building detection rules). Use both during a phishing investigation.
Threat Explorer always — it has a better interface
Advanced Hunting always — KQL is more powerful
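
The cross-domain correlation Advanced Hunting enables can be sketched in KQL. The table and column names follow the Microsoft 365 Defender schema, but the subject filter is a placeholder for whatever campaign is under investigation:

```
// Sketch: which recipients of the campaign signed in shortly afterward?
EmailEvents
| where Timestamp > ago(7d)
| where Subject has "Password expiry notice"   // placeholder campaign subject
| project RecipientEmailAddress, NetworkMessageId, EmailTime = Timestamp
| join kind=inner (
    AADSignInEventsBeta
    | where Timestamp > ago(7d)
    | project AccountUpn, IPAddress, SignInTime = Timestamp
  ) on $left.RecipientEmailAddress == $right.AccountUpn
| where SignInTime > EmailTime
```

Threat Explorer cannot express this join; conversely, bulk email remediation happens in Threat Explorer, not in a query, which is why the answer is "use both."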

10. You identify a phishing campaign in 200 mailboxes. Soft delete or hard delete?

Soft delete first. Verify the campaign is malicious. Then hard delete if confirmed. Soft delete is reversible — a false positive hard deletion across 200 mailboxes cannot be undone.
Hard delete — speed matters
Leave them and block the sender

11. First contact safety tip shows "You don't often get email from this sender." Why is this effective against phishing?

Phishing campaigns come from new domains the user has never seen before. The safety tip alerts them to verify the sender before interacting. It is a low-friction security awareness mechanism built into the email itself — no training required.
It blocks the email
It only works for internal emails

12. Email AIR auto-remediates 69% of email threats. The remaining 31% are pending approval. Should you increase automation?

Review the 31% pending items. If they are consistently approved without changes, the detection is accurate and automation can be increased. If some are rejected or modified, keep the approval requirement — human review is catching genuine edge cases.
Yes — 69% proves it works
No — always require human approval