9.2 Anti-Phishing Policies
By the end of this subsection, you will be able to configure impersonation protection for executives and domains, enable mailbox intelligence, set phishing thresholds, and monitor policy effectiveness with KQL.
Anti-phishing policies in Defender for Office 365 go beyond basic spam filtering. They use machine learning and heuristic analysis to detect impersonation attempts, spoof emails, and sophisticated phishing that passes signature-based checks.
Impersonation protection — users and domains
User impersonation protects specific high-value users. The attacker sends an email that appears to come from the CFO (e.g., “John Smith” but from john.smith@attacker-domain.com). The policy detects the display name match against the protected user list.
Domain impersonation protects your organization’s domains and partner domains. The attacker uses a lookalike domain (northgateeng.co instead of northgateeng.com). The policy detects the visual similarity.
| Protection type | What to configure | Recommended action |
|---|---|---|
| Protected users | Add executives, finance team, IT admins (up to 350 users) | Quarantine the message |
| Protected domains | Add your domains + key partner/vendor domains | Quarantine the message |
| Mailbox intelligence | Enable (learns each user’s communication patterns) | Move to Junk folder |
| Mailbox intelligence impersonation | Enable (combines MI with impersonation detection) | Quarantine the message |
BEC attacks target finance team members by impersonating executives or vendors requesting payment changes. Add every member of your finance and accounts payable team to the protected users list, plus every executive whose name an attacker might impersonate. This directly prevents the follow-on attack that the Module 14 attacker was preparing for (financial reconnaissance for BEC).
First contact safety tip
When a user receives an email from a sender for the first time, a safety tip appears: “You don’t often get email from this sender.” This is a low-friction security awareness mechanism — it does not block the email but alerts the user to verify the sender.
Enable this for all users. It is especially effective against first-wave phishing where the sender domain has never been seen before (exactly the Module 14 scenario — northgate-voicemail.com was a first-time sender for every recipient).
Spoof intelligence
Spoof intelligence detects emails where the “From” address domain does not match the actual sending infrastructure (SPF/DKIM failure or domain mismatch). Defender maintains a spoof intelligence insight showing which senders are spoofing your domain and whether they are legitimate (marketing platforms, third-party services) or malicious.
Review the spoof intelligence insight monthly. Legitimate spoofing senders (your email marketing platform sending “from” your domain) should be explicitly allowed. Everything else should be blocked.
Phishing threshold
The anti-phishing threshold controls how aggressively Defender classifies emails as phishing:
| Threshold | Behavior | Best for |
|---|---|---|
| 1 — Standard | Default sensitivity | Starting point |
| 2 — Aggressive | More emails classified as phishing | Most organizations after initial tuning |
| 3 — More aggressive | Significantly more phishing detections | Organizations with high phishing volume |
| 4 — Most aggressive | Maximum detection, higher FP risk | Only if you have a dedicated team reviewing quarantine |
The default threshold (1) misses a meaningful percentage of sophisticated phishing. Threshold 2 catches significantly more with minimal false positive increase. Move to 3 only after running 2 for 30 days and confirming quarantine volume is manageable. Threshold 4 generates enough quarantine volume to require daily admin review.
Monitoring anti-phishing effectiveness
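The daily monitoring query referenced throughout this subsection can be sketched in KQL against the advanced-hunting `EmailEvents` and `EmailPostDeliveryEvents` tables. This is a sketch, not a verbatim query from the product docs — the `"Phish ZAP"` action-type string and the delivery-action values are the commonly documented ones, but verify them in your tenant before relying on the numbers:

```kql
// Daily phishing detection and disposition counts, last 7 days
EmailEvents
| where Timestamp > ago(7d)
| where ThreatTypes has "Phish"
| join kind=leftouter (
    // Mark messages later removed by zero-hour auto purge (ZAP)
    EmailPostDeliveryEvents
    | where ActionType == "Phish ZAP"
    | project NetworkMessageId, IsZapped = true
  ) on NetworkMessageId
| summarize
    TotalPhish = count(),
    Delivered  = countif(DeliveryAction == "Delivered"),
    Blocked    = countif(DeliveryAction == "Blocked"),
    Junked     = countif(DeliveryAction == "Junked"),
    ZAPRemoved = countif(IsZapped == true)
  by Date = bin(Timestamp, 1d)
| order by Date asc
```

Run daily (or schedule as a custom detection) to produce a per-day view like the sample table below.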
| Date | TotalPhish | Delivered | Blocked | Junked | ZAPRemoved |
|---|---|---|---|---|---|
| Mar 15 | 47 | 8 | 31 | 4 | 4 |
| Mar 16 | 23 | 3 | 17 | 2 | 1 |
| Mar 17 | 142 | 19 | 108 | 8 | 7 |
Spoof intelligence management
Review the spoof intelligence insight monthly in the Defender portal (Email & collaboration → Policies → Anti-phishing → Spoof intelligence insight).
What you will see:
- Senders that appear to be spoofing your domain or partner domains
- Whether each sender passed or failed authentication
- Whether you have allowed or blocked each sender
Common legitimate spoofers to allow:
- Marketing platforms (Mailchimp, HubSpot) sending “from” your domain
- CRM systems (Salesforce) sending notifications “from” your domain
- Ticketing systems (ServiceNow, Zendesk) sending “from” your domain
- Third-party email signature services
Action: Allow known legitimate senders. Block everything else. Review monthly — new third-party services are onboarded regularly.
Required role and blast radius
Required role: Security Administrator (minimum for policy changes). Exchange Administrator also works but grants broader permissions than needed.
Configuration walkthrough — anti-phishing policy
Navigate to Defender portal → Email & collaboration → Policies & rules → Threat policies → Anti-phishing.
Step 1: Edit the default policy. Every organisation has a default anti-phishing policy that applies to all users. Edit it rather than creating a new one — new policies require explicit user/group assignment and can leave gaps in coverage.
Step 2: Enable impersonation protection.
Add protected users: Navigate to the “Users to protect” section. Add each executive and finance team member by name and email address. Maximum 350 users. Start with: CEO, CFO, CTO, CISO, Finance Director, Accounts Payable lead, Payroll lead, IT Director.
Add protected domains: Navigate to “Domains to protect.” Enable “Include the domains I own” (automatically protects all accepted domains). Add partner/vendor domains that attackers might impersonate — start with your top 10 vendors by payment volume.
Blast radius: Impersonation protection may quarantine legitimate email from senders whose display name matches a protected user but whose domain differs (common for external consultants who use the executive’s name in their display name). Review quarantine weekly for the first month.
Rollback: Remove users/domains from the protected list. Previously quarantined email can be released from quarantine.
Verify: After enabling, run the monitoring query from this subsection daily for 7 days. Track the ratio of detected phishing to false positives in quarantine.
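An impersonation-focused verification query can be sketched as follows. The assumption here is that the `DetectionMethods` column (a JSON field in `EmailEvents`) contains labels such as "Impersonation user" and "Impersonation domain" — check the exact strings your tenant emits before alerting on them:

```kql
// Daily user- and domain-impersonation detections, last 7 days
EmailEvents
| where Timestamp > ago(7d)
| where ThreatTypes has "Phish"
| extend Methods = tostring(DetectionMethods)
| summarize
    UserImpersonation   = countif(Methods has "Impersonation user"),
    DomainImpersonation = countif(Methods has "Impersonation domain")
  by bin(Timestamp, 1d)
| order by Timestamp asc
```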
If both columns show counts after enabling: impersonation protection is detecting threats. If zero after 7 days: either no impersonation attempts occurred (possible in a low-volume environment) or the policy is not configured correctly — verify in the Defender portal that the policy shows “Applied” status.
Step 3: Enable mailbox intelligence.
Mailbox intelligence learns each user’s communication patterns over 30 days. After the learning period, it detects emails from senders that look like known contacts but are not (e.g., the sender’s display name matches a frequent contact, but the domain is different).
Enable: Anti-phishing policy → Mailbox intelligence → On. Mailbox intelligence for impersonation protection → On. Action: Move to Junk (not quarantine — mailbox intelligence has a higher false positive rate than explicit user/domain impersonation protection during the first 30 days while it learns).
Step 4: Set the phishing threshold to 2.
Anti-phishing policy → Advanced phishing thresholds → 2 (Aggressive). Monitor quarantine volume for 30 days. If false positive rate is acceptable: leave at 2. If quarantine contains more than 5-10 legitimate emails per week: review the false positives, add sender allows for legitimate senders, and maintain threshold 2.
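To watch quarantine volume after the threshold change, a simple trend query works — a sketch assuming quarantined mail is reflected in `EmailEvents` with `DeliveryLocation == "Quarantine"`:

```kql
// Daily quarantined-phish volume, last 30 days — watch for a spike after raising the threshold
EmailEvents
| where Timestamp > ago(30d)
| where ThreatTypes has "Phish"
| where DeliveryLocation == "Quarantine"
| summarize QuarantinedPhish = count() by bin(Timestamp, 1d)
| render timechart
```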
A stable quarantine volume (no sudden spike) after changing from threshold 1 to 2 indicates the change is catching additional threats without generating excessive false positives.
NIST CSF: DE.CM-4 (Malicious code is detected). ISO 27001: A.8.23 (Web filtering), A.8.7 (Protection against malware). SOC 2: CC6.8 (Prevent or detect against unauthorized or malicious software). Anti-phishing policies are the primary email-layer control that auditors verify.
Advanced anti-phishing: mailbox intelligence deep dive
Mailbox intelligence is the ML-powered component of anti-phishing. It learns each user’s communication patterns over 30 days and then flags emails that deviate from those patterns.
What it learns: Frequent correspondents (the people each user regularly emails), communication style (typical greetings, sign-offs, and tone), sending infrastructure (which IPs and domains legitimate contacts use), and timing patterns (when the user typically receives email from each contact).
What it detects: An email that appears to be from a frequent correspondent but originates from unfamiliar infrastructure. Example: j.morrison regularly receives email from sarah.jones@meridian-precision.co.uk (sent from Meridian’s Exchange server at 198.51.100.10). A phishing email claims to be from sarah.jones but originates from 203.0.113.47 (the attacker’s infrastructure). The display name and even the email address may match — but the sending infrastructure does not. Mailbox intelligence flags the discrepancy.
The 30-day learning period is critical. During the first 30 days after enabling, mailbox intelligence is learning — it generates fewer detections and may produce more false positives. Do not judge its effectiveness in the first month. After 30 days, it has built a communication graph for each user and begins detecting impersonation with high accuracy.
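A sketch for tracking mailbox intelligence detections over the learning period — again assuming the `DetectionMethods` JSON carries a "Mailbox intelligence" label, which you should confirm against a known detection in your own data:

```kql
// Weekly mailbox intelligence detection counts, last 60 days
EmailEvents
| where Timestamp > ago(60d)
| where ThreatTypes has "Phish"
| where tostring(DetectionMethods) has "Mailbox intelligence"
| summarize MIDetections = count() by bin(Timestamp, 7d)
| order by Timestamp asc
```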
If mailbox intelligence detections appear after the 30-day learning period: the ML model is working. If zero after 60 days: either no impersonation attempts occurred (possible), or the model is not configured correctly (check that mailbox intelligence AND mailbox intelligence for impersonation protection are both enabled in the policy).
Anti-phishing policy for multiple departments
Large organisations may need different anti-phishing settings per department. Example: the finance team needs threshold 3 (more aggressive) because they are the primary BEC target. The engineering team can operate at threshold 2 because it receives fewer financial-themed phishing emails.
Implementation: Create a custom anti-phishing policy with condition “The recipient is a member of: Finance Department.” Set threshold 3. Protected users: finance executives + accounts payable. The default policy (threshold 2) applies to all other users.
Policy precedence: Custom policies take priority over the default policy. A finance user matching the custom policy condition receives threshold 3. All other users receive the default (threshold 2). Verify: Defender → Policies → Anti-phishing → check policy order.
Anti-phishing reporting and metrics
Track these metrics monthly to measure anti-phishing policy effectiveness:
Delivery rate: Phishing emails delivered ÷ total phishing detected. Target: under 10%. Calculate from the monitoring query: Delivered / TotalPhish * 100. If consistently above 10%: increase the phishing threshold or add more protected users/domains.
Impersonation detection rate: How many impersonation attempts are caught vs missed. Calculate: impersonation detections (from the MI query) ÷ total phishing delivered. A steady increase after enabling mailbox intelligence indicates the ML model is learning.
False positive rate in quarantine: Legitimate emails quarantined as phishing ÷ total quarantined emails. Review quarantine weekly. Target: under 5% false positive rate. If higher: the threshold may be too aggressive, or specific senders need to be added to the allow list.
Protected user coverage: Number of protected users ÷ number of high-value users. Target: 100% of executives, finance team, and IT admins. Review quarterly — new hires in these roles should be added immediately.
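EffectiveDelivery can be computed by excluding ZAP-removed messages from the delivered count. This sketch uses a `leftanti` join against `EmailPostDeliveryEvents` (the `"Phish ZAP"` string is an assumption to verify in your tenant):

```kql
// True exposure: phish that was delivered AND never removed by ZAP
EmailEvents
| where Timestamp > ago(30d)
| where ThreatTypes has "Phish" and DeliveryAction == "Delivered"
| join kind=leftanti (
    EmailPostDeliveryEvents
    | where ActionType == "Phish ZAP"
    | project NetworkMessageId
  ) on NetworkMessageId
| summarize EffectiveDelivery = count() by bin(Timestamp, 1d)
| order by Timestamp asc
```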
The EffectiveDelivery metric (delivered minus ZAP-removed) is your true exposure — emails that reached inboxes AND stayed there. This is the number to report to management and to drive policy tuning decisions.
Anti-phishing policy testing
After configuring the policy, test it. Send test phishing emails to verify detection:
Test 1: User impersonation. From an external account, send an email with the display name of a protected user (e.g., “John Smith, CFO”) to an internal recipient. Expected result: quarantined by impersonation protection.
Test 2: Domain impersonation. Register a lookalike test domain (e.g., northgateeeng.com — extra ’e’). Send an email from that domain. Expected result: caught by domain impersonation protection.
Test 3: First contact safety tip. From a never-before-seen external address, send a benign email. Expected result: the recipient sees the “You don’t often get email from this sender” safety tip.
Test 4: Phishing threshold. Use the Attack Simulation Training feature (P2) to send a simulated phishing campaign. Track: how many were caught by anti-phishing, how many were delivered, and how many users clicked.
Document test results. If any test fails: the policy configuration needs adjustment. Re-test after changes.
Try it yourself
Run Test 1 against your own tenant: from an external account, send an email whose display name matches a protected user (e.g., "John Smith, CFO") to an internal recipient. The email should be caught by user impersonation protection — the display name matches a protected user but the domain does not. The email is quarantined (if that is your configured action) or moved to junk. Check the EmailEvents table: ThreatTypes should show "Phish" with the detection technology indicating impersonation. If the email was delivered: verify the protected user list is correctly configured and the policy is enabled.
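A sketch for verifying the test message in advanced hunting — the "John Smith" display name is the hypothetical one from the test above; substitute whatever you actually used:

```kql
// Find the test message and inspect its disposition
EmailEvents
| where Timestamp > ago(1h)
| where SenderDisplayName has "John Smith"  // the display name used in your test
| project Timestamp, SenderFromAddress, RecipientEmailAddress,
          ThreatTypes, DetectionMethods, DeliveryAction, DeliveryLocation
```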
Try it yourself
Scenario: the monitoring query shows 19 phishing emails delivered to inboxes on a single day (the Mar 17 spike in the table above). Work through the response:
1. Investigate the 19 emails: Are they from a single campaign (same domain/URL pattern)? Use Threat Explorer or the KQL campaign scope query.
2. Remediate: Soft-delete from all affected mailboxes via Threat Explorer.
3. Check clicks: Query UrlClickEvents — did anyone click?
4. Contain compromised users: For anyone who clicked + entered credentials, execute the Module 14.7 containment sequence.
5. Block future waves: Transport rule for the URL pattern or sender domain.
6. Tune policy: Increase anti-phishing threshold from 1 to 2. Re-run the monitoring query after 7 days — did the Delivered count decrease?
7. If threshold 2 still shows high delivery: Increase to 3 and monitor quarantine volume for false positives.
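Steps 1 and 3 above can be sketched in KQL. These are two separate queries (run each on its own in advanced hunting); the table and column names follow the standard `EmailEvents`/`UrlClickEvents` schema, but treat the `"ClickAllowed"` action-type value as an assumption to verify:

```kql
// Step 1 — campaign scope: cluster the delivered phish by sender domain and subject
EmailEvents
| where Timestamp > ago(1d)
| where ThreatTypes has "Phish" and DeliveryAction == "Delivered"
| summarize Messages = count(), Recipients = dcount(RecipientEmailAddress)
  by SenderFromDomain, Subject
| order by Messages desc

// Step 3 — did anyone click? (run separately)
UrlClickEvents
| where Timestamp > ago(1d)
| where ActionType == "ClickAllowed"
| project Timestamp, AccountUpn, Url, NetworkMessageId
```

If the first query shows one dominant domain/subject pair, you are dealing with a single campaign and can scope remediation (step 2) to that cluster.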
Check your understanding
1. Why protect individual users (executives, finance) in anti-phishing policy instead of just protecting domains?
2. Your anti-phishing monitoring shows 19 phishing emails delivered to inboxes on a single day. What is your immediate action?