AD2.9 User-Reported Phishing Workflow
Figure AD2.9 — The user-reported phishing workflow. User clicks "Report Phishing" → email routed to Microsoft and your admin queue → admin reviews and takes action → user receives feedback on their report. The feedback loop is critical — users who never hear back stop reporting.
Configuring the Report Message add-in
The Report Message add-in is built into modern Outlook clients (Outlook for Windows, Outlook for Mac, Outlook on the web, and Outlook mobile). For newer Outlook versions, the "Report" button appears automatically in the ribbon. For older versions, you may need to deploy the add-in via the Microsoft 365 Admin Center → Settings → Integrated apps → search for "Report Message."
To configure where reported messages go, navigate to security.microsoft.com → Email & collaboration → Policies & rules → Threat policies → User reported settings.
User reporting: Set to "On" — this enables the reporting button for all users.
Send reported messages to: Select "Microsoft and my reporting mailbox." This dual routing ensures Microsoft receives the sample for global filter improvement AND you receive the report for local investigation. Enter your admin or shared security mailbox address for the reporting mailbox (e.g., phishing-reports@northgateeng.com).
User reporting experience: Enable "Show a success message after the user reports." This is the feedback that reinforces reporting behavior. If users report and never hear anything, they stop reporting. The success message confirms their report was received.
Results email notifications: Enable this if available. When Microsoft completes analysis of the reported email, the user receives a notification with the verdict. This closes the feedback loop — the user knows whether they caught a real threat or a false alarm.
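The same settings can be applied from Exchange Online PowerShell via the report submission policy. This is a hedged sketch: the policy name DefaultReportSubmissionPolicy and the parameter names are as commonly documented for the Set-ReportSubmissionPolicy cmdlet, and the mailbox address is the example from this module; verify the exact parameters in your tenant with Get-Help Set-ReportSubmissionPolicy before running.

```powershell
Connect-ExchangeOnline

# Route phishing reports to both Microsoft and the internal reporting mailbox,
# and turn on the result notification back to the reporting user
Set-ReportSubmissionPolicy -Identity DefaultReportSubmissionPolicy `
    -EnableReportToMicrosoft $true `
    -ReportPhishToCustomizedAddress $true `
    -ReportPhishAddresses "phishing-reports@northgateeng.com" `
    -EnableUserEmailNotification $true

# Confirm the dual routing took effect
Get-ReportSubmissionPolicy |
    Select-Object EnableReportToMicrosoft, ReportPhishAddresses, EnableUserEmailNotification
```

The portal path above remains the primary interface; the cmdlet is useful when you manage multiple tenants and want the configuration scripted and repeatable.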
The admin review workflow
Reported messages appear in security.microsoft.com → Email & collaboration → Submissions → User reported tab. This is the queue you check weekly.
Each reported message shows: the reporter, the subject, the sender, the delivery location, and Microsoft's automatic verdict (phishing, spam, or not a threat). For most reports, Microsoft's verdict is sufficient — if they say it's phishing, it's phishing. If they say it's clean, it's clean.
Your review process: scan the queue for reports where Microsoft's verdict says "Phishing" or "Malware." For these, check whether other users received the same email using message trace. If the phishing email was sent to multiple users, use the Threat Explorer (P2) or message trace to find all instances and remediate — quarantine or delete the email from all affected mailboxes.
For reports where Microsoft says "Not a threat," check whether the user had a reasonable basis for reporting. A legitimate email from a new sender that looked suspicious is a reasonable report — thank the user (informally or through the feedback mechanism) for their vigilance. A newsletter the user doesn't want is not a phishing report — that's an unsubscribe, not a security event.
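The trace-and-purge step can be scripted. The sketch below assumes a hypothetical sender address and subject; substitute the values from the reported message. Get-MessageTrace covers roughly the last 10 days, and the purge step requires the separate Security & Compliance PowerShell session plus the appropriate eDiscovery role.

```powershell
Connect-ExchangeOnline

# Find every recipient of the reported email (sender and subject are placeholders)
Get-MessageTrace -SenderAddress "attacker@badactor.example" `
    -StartDate (Get-Date).AddDays(-2) -EndDate (Get-Date) |
    Where-Object { $_.Subject -like "*Verify your account*" } |
    Select-Object Received, RecipientAddress, Status

# Remove the message from all affected mailboxes via a compliance search
Connect-IPPSSession
New-ComplianceSearch -Name "Phish-VerifyAccount" -ExchangeLocation All `
    -ContentMatchQuery 'subject:"Verify your account" AND from:attacker@badactor.example'
Start-ComplianceSearch -Identity "Phish-VerifyAccount"

# SoftDelete moves the message to recoverable items rather than destroying it
New-ComplianceSearchAction -SearchName "Phish-VerifyAccount" -Purge -PurgeType SoftDelete
```

With Defender for Office 365 Plan 2, Threat Explorer can do the same find-and-remediate flow from the portal; the scripted version is the fallback for Plan 1 tenants.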
Communicating the reporting workflow to users
Deploy the Report Message functionality with a brief communication to all users: "If you receive a suspicious email, click the 'Report' button in Outlook and select 'Phishing.' This sends the email to our security team for review. You'll receive feedback on whether the email was a real threat. Reporting suspicious emails helps protect the entire organization — it takes 2 seconds and could prevent a security incident."
Avoid the mistake of over-training users with complex decision trees ("if the email has a link, check the URL; if the domain looks wrong, compare it to the sender address..."). Users are not security analysts. The message should be simple: "If it looks wrong, report it. We'll handle the analysis." A 5-second decision with a one-click action gets more reports than a 5-minute evaluation process that most users skip.
Deploying the Report Message add-in via Intune
If the Report Message button doesn't appear automatically in your users' Outlook, deploy it centrally. Navigate to the Microsoft 365 Admin Center → Settings → Integrated apps → Get apps. Search for "Report Message" by Microsoft Corporation. Click "Get it now" and assign it to all users.
The deployment takes 12-24 hours to propagate to all Outlook clients. After deployment, users see a "Report" button in the Outlook ribbon (desktop), the message toolbar (web), or the overflow menu (mobile).
To verify the add-in is deployed, check with PowerShell:
Connect-ExchangeOnline
Get-App -OrganizationApp | Where-Object { $_.DisplayName -like "*Report*" } |
    Select-Object DisplayName, Enabled, DefaultStateForUser

The DefaultStateForUser value should be "Enabled", meaning the add-in is active for all users by default without requiring individual installation.
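If the check shows the add-in disabled, you can flip the state with Set-App. This is a hedged sketch: it assumes Set-App accepts the AppId returned by Get-App as its identity, which is worth confirming with Get-Help Set-App in your session.

```powershell
# Locate the Report Message add-in and enable it for all users by default
$app = Get-App -OrganizationApp | Where-Object { $_.DisplayName -like "*Report*" }
Set-App -OrganizationApp -Identity $app.AppId -Enabled $true -DefaultStateForUser Enabled
```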
Tracking reporting metrics
Monitor user reporting volume monthly to gauge engagement. Navigate to security.microsoft.com → Email & collaboration → Submissions → User reported. Export the last 30 days.
Key metrics to track: total reports per month (increasing = good, users are engaged), percentage that are true positives (typically 10-30% — the rest are false positives, which is normal), top reporting users (these are your most security-conscious employees — consider them for security champion roles), and response time (how long between report and admin review).
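The exported CSV can be summarized in a few lines of PowerShell. The column names below ("Result", "Reported by") and the file name are assumptions; the export headers vary by portal version, so adjust them to match your actual file.

```powershell
# Summarize a "User reported" export (column names are assumptions; match your headers)
$reports = Import-Csv .\UserReported_Last30Days.csv

$total = $reports.Count
$truePositives = ($reports | Where-Object { $_.Result -match 'Phish|Malware' }).Count

"Total reports: $total"
if ($total -gt 0) {
    "True positives: $truePositives ({0:P0} of reports)" -f ($truePositives / $total)
}

# Top reporters: candidates for security champion roles
$reports | Group-Object 'Reported by' | Sort-Object Count -Descending |
    Select-Object -First 5 Name, Count
```

A true-positive rate in the 10-30% band from the paragraph above is the expected range; a rate near zero usually means users are reporting unwanted newsletters rather than threats, which is a training message, not a tooling problem.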
If reporting volume drops to near zero after the initial deployment, users have stopped using the button — either because they forgot it exists (send a reminder), they never heard back about their reports (fix the feedback loop), or they got frustrated with the process (simplify it). The feedback mechanism (Microsoft's automated verdict notification to the user) is critical for sustained engagement — users who see "This was phishing — thank you for reporting" continue reporting. Users who hear nothing stop.
For your quarterly management report, include: "X phishing emails were reported by users this quarter. Y were confirmed threats and remediated. User reporting caught Z threats that automated filters missed." This demonstrates that your security awareness investment is producing measurable results.
Measuring user readiness with phishing simulations
If you have Defender for Office 365 Plan 2 (E5) or the Attack Simulation Training add-on, you can send simulated phishing emails to your users and measure who clicks. Navigate to security.microsoft.com → Email & collaboration → Attack simulation training. Microsoft provides pre-built simulation templates that mimic real phishing campaigns — fake password reset pages, fake document sharing, fake IT notifications.
With E3 (no Attack Simulation Training), you can still test informally. Send a test email from a personal account to a small pilot group (with management approval) that mimics a common phishing pattern — a "please verify your account" email with a link to a benign page you control. Track who clicks. This isn't as sophisticated as the built-in simulation tool, but it gives you a baseline click rate.
The metrics that matter from simulations: click-through rate (percentage of users who click the link), credential entry rate (percentage who enter credentials on the simulated page), and report rate (percentage who report the simulation using the Report Message button). The ideal outcome is a low click rate and a high report rate — users recognize the phishing and report it instead of clicking.
Run simulations quarterly. Track the trend over time. A declining click rate demonstrates that your combination of technical controls (catching phishing before users see it) and user awareness (training users to recognize what gets through) is working. Include the simulation results in your quarterly management report alongside the user-reported phishing metrics — this builds the evidence that security awareness produces measurable behavioral change, not just "training completed" checkboxes.
Three users report the same phishing email within 15 minutes. The email claims to be from Microsoft and asks users to "verify their account" by clicking a link. Microsoft's automated verdict says "Phishing." You confirm it's a credential harvesting page. What do you do beyond the automated response?
Option A: Nothing — the reporters are covered, and the phishing email will be caught by Safe Links if other users click it.
Option B: Use message trace to find all instances of this email across your tenant, quarantine or delete them from all mailboxes, check the sign-in log for any users who may have entered credentials, and send a brief org-wide alert about this specific phishing campaign.
The correct answer is Option B. Three reports in 15 minutes means the email was sent to many users. The three who reported are safe. The users who didn't report may still have the email in their inbox — and some of them may click. Proactive remediation (removing the email from all mailboxes) is faster than waiting for every user to report or for Safe Links to catch every click. The sign-in log check identifies any users who entered credentials before the email was removed. The org-wide alert raises awareness about this specific campaign while it's fresh.
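The sign-in log check from Option B can be scripted with Microsoft Graph PowerShell. This sketch assumes the Microsoft.Graph.Reports module is installed, the account has AuditLog.Read.All consent, and the tenant has the Entra ID P1 licensing that sign-in log access requires; the recipient addresses are placeholders for the list produced by your message trace.

```powershell
Connect-MgGraph -Scopes "AuditLog.Read.All"

# Recipients identified via message trace (placeholders)
$recipients = @("user1@northgateeng.com", "user2@northgateeng.com")

# Review recent sign-ins for each recipient; look for unfamiliar IPs or locations
# in the window after the phishing email was delivered
foreach ($upn in $recipients) {
    Get-MgAuditLogSignIn -Filter "userPrincipalName eq '$upn'" -Top 25 |
        Select-Object CreatedDateTime, IpAddress,
            @{n = 'City'; e = { $_.Location.City } },
            @{n = 'ErrorCode'; e = { $_.Status.ErrorCode } }
}
```

Any successful sign-in from an unexpected IP after delivery is grounds for an immediate password reset and session revocation for that user.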
Try it: Configure user reporting and test it
Navigate to security.microsoft.com → Policies & rules → Threat policies → User reported settings. Enable reporting, set routing to Microsoft + your admin mailbox, and enable feedback notifications.
Test it: send yourself a test email (from a personal account, simulating a suspicious external email). Open it in Outlook and click "Report" → "Phishing." The email should move to your Junk folder and a copy should appear in your admin reporting mailbox.
Check the Submissions page (security.microsoft.com → Submissions → User reported). Your test report should appear within a few minutes. Note the automatic verdict and the available actions.
If the Report button doesn't appear in your Outlook client, check: is the add-in deployed? Is the Outlook version current? For Outlook on the web, the Report button is in the message toolbar. For Outlook desktop, it's in the ribbon under "Report Message."
You're reading the free modules of M365 Security: From Admin to Defender
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.