In this module
AD5.3 Navigating the Defender Portal Incident Queue
Figure AD5.3 — The Defender incident queue correlates multiple alerts into single incidents that tell the complete attack story. Your Monday review filters for Active + High/Medium severity, investigates the important ones, and classifies and closes the rest. The goal: zero Active incidents by the end of your review.
Opening the incident queue
Navigate to security.microsoft.com → Incidents & alerts → Incidents. The default view shows all incidents across all time periods and all severities.
For your Monday review, apply these filters immediately:
Status: Active (excludes incidents you've already classified and closed)
Severity: High, Medium (excludes informational-only incidents that don't require investigation)
Time range: Last 7 days (focuses on the current review period)
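The same filtered view maps onto the Microsoft Graph security API: the `/security/incidents` endpoint supports OData `$filter` on `status`, `severity`, and `createdDateTime`. A minimal sketch building that filter string (property names and enum casing per the v1.0 API as I understand it; verify against the current Graph reference before relying on them):

```python
from datetime import datetime, timedelta, timezone

GRAPH_INCIDENTS = "https://graph.microsoft.com/v1.0/security/incidents"

def monday_review_filter(days: int = 7) -> str:
    """OData $filter reproducing the Monday review view:
    Active status, High/Medium severity, last `days` days."""
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )
    return (
        "status eq 'active' "
        "and (severity eq 'high' or severity eq 'medium') "
        f"and createdDateTime ge {since}"
    )

# Full request URL to pass to an authenticated Graph client (GET):
url = f"{GRAPH_INCIDENTS}?$filter={monday_review_filter()}"
```

A scripted query like this is useful if you later want the Monday review numbers logged automatically, but the bookmarked portal URL is all the manual review needs.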
Bookmark this filtered URL in your browser. Next Monday, click the bookmark and you're directly in the filtered queue — no navigation or filter setup needed.
The queue shows each incident with: a name (auto-generated based on the attack pattern, e.g., "Multi-stage incident on 1 endpoint reported by multiple sources"), severity (High/Medium/Low/Informational), status (Active/In progress/Resolved), the number of alerts within the incident, the entities affected (users, devices, mailboxes), and the last activity timestamp.
Understanding severity levels
Microsoft assigns severity automatically based on the detection confidence and potential impact:
High: Confirmed or high-confidence detection of a serious threat. Examples: ransomware activity detected, confirmed credential compromise with suspicious post-compromise activity, bulk data exfiltration. Investigate immediately during your Monday review — don't defer these to next week.
Medium: Suspicious activity that may or may not be malicious. Examples: suspicious sign-in properties, unusual inbox rule creation, anomalous PowerShell execution. These are the incidents that require judgment — investigation determines whether it's a real threat or a false alarm.
Low: Detections with lower confidence or limited scope. Examples: a single failed sign-in from an unfamiliar location, a policy tip triggered by a DLP match, a user clicking a rewritten Safe Links URL that was subsequently classified as safe.
Informational: System events, policy applications, and logging events that don't indicate a threat but provide context.
For your Monday review, focus on High and Medium. Low and Informational can be batch-classified and closed — scan the titles, confirm they're genuinely low-risk, and close them.
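The triage rule above is mechanical enough to express as a helper; a sketch that splits a queue (represented here as plain dicts with a hypothetical `severity` key) into the two buckets:

```python
def partition_for_review(incidents):
    """Split a queue into incidents to open individually (high/medium)
    and incidents eligible for batch classification (low/informational)."""
    investigate, batch_close = [], []
    for inc in incidents:
        bucket = investigate if inc["severity"] in ("high", "medium") else batch_close
        bucket.append(inc)
    return investigate, batch_close

# Sample queue shaped like a trimmed Graph incident response:
queue = [
    {"id": "1", "severity": "medium"},
    {"id": "2", "severity": "informational"},
    {"id": "3", "severity": "high"},
    {"id": "4", "severity": "low"},
]
investigate, batch_close = partition_for_review(queue)
```

The point of the split: `investigate` gets your attention one incident at a time; `batch_close` gets a title scan and a bulk action.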
Working through an incident
Click on an incident to open it. The incident page has several tabs:
Attack story (Summary): The narrative view of the incident. Shows the timeline of events, the entities involved (users, devices, IPs), and the sequence of alerts. This is the first tab to read — it tells you what happened in chronological order. Newer Defender portal versions include an AI-generated summary if Security Copilot is available — but even without Copilot, the attack story tab provides the key facts.
Alerts: The individual alerts that make up the incident. Each alert has its own severity, description, and evidence. Click an alert to see the raw details — the specific log entry, the detection rule that triggered, and the recommended response actions.
Assets: The users, devices, and mailboxes involved. Click a user to see their sign-in history, their risk level, and other incidents they're involved in. Click a device to see its compliance status, its MDE health (if applicable), and its recent activity.
Evidence and response: The specific artifacts — files, URLs, emails, processes — associated with the incident. This tab also shows any automated actions that Defender took (quarantined an email, blocked a URL, isolated a device).
Graph (if available): A visual representation of the attack chain showing relationships between entities, alerts, and evidence. Useful for complex multi-step incidents.
For your Monday review, most incidents need only the Attack story tab and a quick look at Assets. Read the story, check whether the affected user is a real person at your organization, assess whether the activity is genuinely suspicious, classify, and close.
Creating saved filters and tags
The Defender portal supports custom tags that you can add to incidents for your own tracking. Create a consistent tagging system:
"Monday-Review" — tag every incident you review during the Monday check. This creates a historical record of which incidents were triaged during structured review vs discovered ad-hoc.
"Needs-Investigation" — tag incidents that you identified during the Monday review but need deeper investigation beyond the 15-minute timebox. Come back to these after the review is complete.
"Escalated" — tag incidents that you've forwarded to your managed SOC or management.
To tag an incident: open the incident → click "Manage incident" → Tags → type your tag and press Enter. Tags persist across sessions and are searchable — you can filter the incident queue by tag to see all incidents tagged "Needs-Investigation" that are still open.
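Tags can also be set programmatically: as far as I can tell, the portal tags correspond to the `customTags` property on the Graph v1.0 incident resource, and a PATCH replaces the whole list, so any script must preserve the tags already on the incident. A sketch of the payload builder (the property name is worth confirming against the current Graph docs):

```python
def add_tag_payload(existing_tags, new_tag):
    """PATCH body for /security/incidents/{id} that appends a tag to
    customTags without dropping tags already on the incident."""
    tags = list(existing_tags)
    if new_tag not in tags:
        tags.append(new_tag)  # avoid duplicate tags
    return {"customTags": tags}

payload = add_tag_payload(["Monday-Review"], "Needs-Investigation")
```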
For the filtered queue URL, bookmark the complete filtered state including severity, status, and time range. The Defender portal preserves filter parameters in the URL. After setting your filters (Active + High/Medium + Last 7 days), copy the browser URL and save it as a bookmark titled "Monday Security Review — Incident Queue."
Bulk operations for queue hygiene
On a normal Monday, your filtered queue shows 3-5 low-severity incidents alongside any medium/high. After investigating the important ones, you need to close the informational events efficiently. The Defender portal supports bulk actions:
Select multiple incidents using the checkboxes → click "Manage incidents" → set classification (e.g., "Informational — Not a threat") → set status to "Resolved" → add a comment "Batch closed during weekly review — informational events, no action required."
This closes 3-5 incidents in 30 seconds instead of opening each one individually. Reserve individual investigation for medium and high-severity incidents; batch-close the rest.
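The same batch close can be scripted. The Graph API has no multi-incident endpoint that I'm aware of, so "bulk" means one PATCH per incident id; a sketch where `patch(incident_id, body)` is a placeholder for an authenticated Graph PATCH to `/security/incidents/{id}`, and `informationalExpectedActivity` is my assumed v1.0 classification value for informational closures:

```python
def batch_close(incidents, patch):
    """Resolve every low/informational incident with one shared
    classification; returns the ids that were closed."""
    body = {
        "status": "resolved",
        "classification": "informationalExpectedActivity",
    }
    closed = []
    for inc in incidents:
        if inc["severity"] in ("low", "informational"):
            patch(inc["id"], body)  # one call per incident
            closed.append(inc["id"])
    return closed
```

Passing `patch` as a callable keeps the loop testable without network access; in production it would wrap your Graph client.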
Understanding auto-resolved incidents
Some incidents in your queue may already show status "Resolved" with classification "True positive" — resolved automatically by Microsoft Defender's automated investigation and response (AIR). This happens when Defender has high confidence in the detection AND can take automated remediation (quarantine an email, block a URL, quarantine a malicious file).
Review auto-resolved incidents during your Monday check: verify that the automated action was correct by reading the attack story. If Defender quarantined a phishing email that was genuinely phishing, the automation worked correctly — note it in your log. If Defender quarantined a legitimate email (false positive auto-remediation), release the email from quarantine and submit feedback so future detections are more accurate.
Auto-resolved incidents reduce your triage workload but don't eliminate the need for human review. Think of AIR as a junior analyst who handles the obvious cases — you review their work and handle the ambiguous ones.
Note that incident names are auto-generated by Microsoft's correlation engine. Names like "Multi-stage incident on 1 endpoint reported by 2 sources" describe the attack pattern, not the specific details. Don't rely on the name alone — always open the attack story tab for the full context. If your organization uses Microsoft Sentinel alongside Defender XDR, incident names may change when Sentinel alerts are correlated into Defender incidents. Don't use incident names as automation triggers or tracking identifiers — use the incident ID instead.
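Keying any tracking on the incident ID rather than the name can be sketched as a simple review log (the `displayName` field name mirrors the Graph incident resource; treat it as an assumption):

```python
review_log = {}

def log_incident(incident, note):
    """Track incidents by immutable id, never by display name: names are
    regenerated as new alerts correlate in, while ids stay stable."""
    review_log[incident["id"]] = {
        "name_at_review": incident.get("displayName", ""),
        "note": note,
    }

log_incident(
    {"id": "4821", "displayName": "Multi-stage incident on 1 endpoint"},
    "FP - confirmed travel",
)
```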
Classifying and closing incidents
Every incident you review needs a classification. The classification options are:
True positive (TP): The incident describes genuine malicious activity. A phishing email was delivered and a user clicked. An attacker signed in with stolen credentials. Malware was detected on an endpoint. After classifying as TP, take the appropriate response action (AD1.9 compromised account procedure, AD2.10 phishing investigation procedure, etc.).
False positive (FP): The incident was triggered by legitimate activity that the detection engine misidentified. A user traveling internationally triggered impossible travel detection. A legitimate admin tool was flagged as suspicious. A marketing email was flagged as phishing. Classify as FP and close — this feedback improves future detection accuracy.
Benign true positive (BTP): The detection is technically correct (the activity happened) but the activity is legitimate. A security researcher testing phishing defences triggers a phishing detection. An IT admin using PsExec for remote management triggers a lateral movement detection. The detection worked, but the activity is authorized. Classify as BTP and close — consider adding the authorized activity to an exclusion or suppression rule if it triggers repeatedly.
Not a threat / Informational: The incident contains informational events that don't indicate a threat. Classify and close.
After classifying, set the status to "Resolved" and add a brief comment explaining your classification: "FP — user was traveling to Germany, triggered impossible travel detection. Confirmed with user." This comment is your audit trail — it documents the investigation decision for future reference and for any compliance review.
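If you script the classify-and-resolve step, the module's labels have to be mapped onto the Graph incident `classification` enum. The mapping below is my assumption about the v1.0 values (benign true positives appear to map to `informationalExpectedActivity`; there is no dedicated BTP value in the enum as far as I know); the audit-trail comment is appended to the incident's `comments` collection in a separate step:

```python
# Assumed mapping onto the Graph v1.0 classification enum;
# verify against the current API reference before use.
CLASSIFICATION = {
    "TP": "truePositive",
    "FP": "falsePositive",
    "BTP": "informationalExpectedActivity",
    "Informational": "informationalExpectedActivity",
}

def resolve_payload(label: str) -> dict:
    """PATCH body that classifies and resolves an incident in one call."""
    return {"status": "resolved", "classification": CLASSIFICATION[label]}
```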
Your Monday review shows 7 Active incidents. 4 are informational (DLP policy tip events), 2 are low (Safe Links URL clicks that were allowed after scanning), and 1 is medium ("Suspicious sign-in activity" for user m.thompson at 03:00 on Saturday from an IP in a country where you have no employees). How do you prioritize your review?
Option A: Start with the informational incidents and work up to the medium — clear the easy ones first.
Option B: Start with the medium-severity incident — it's the most likely to be a genuine threat. Investigate the suspicious sign-in immediately. Then batch-classify and close the 4 informational and 2 low incidents, which takes 2 minutes total.
The correct answer is Option B. The medium-severity incident has the highest risk — a 03:00 sign-in from an unexpected country could be a credential compromise in progress. Every minute of investigation delay extends the attacker's potential access time. The 6 low/informational incidents can wait 10 minutes while you investigate the medium incident. After resolving the medium incident, batch-close the rest.
Try it: Navigate and classify your incident queue
Navigate to security.microsoft.com → Incidents & alerts → Incidents. Apply filters: Status = Active, Severity = High + Medium + Low, Time range = Last 30 days.
Count the total Active incidents. For each incident:
1. Click the incident to open it
2. Read the Attack story tab — who is affected, what happened, when
3. Check the Assets tab — is the affected user a real person? Is the affected device enrolled?
4. Make a classification decision: TP (investigate further), FP (close), BTP (close), or Informational (close)
5. For FP/BTP/Informational: set status to "Resolved," add a comment, close
6. For TP: investigate using the appropriate module's response procedure
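The six steps above reduce to a loop: classify, hand true positives to the response procedure, resolve everything else. A sketch where `classify`, `resolve`, and `investigate` are placeholders for your judgment call and your portal or Graph actions:

```python
def triage(incidents, classify, resolve, investigate):
    """Walk the queue once. TPs go to deeper investigation; everything
    else is resolved with a comment. Returns the number closed."""
    closed = 0
    for inc in incidents:
        label = classify(inc)  # your read of the attack story: TP/FP/BTP/Informational
        if label == "TP":
            investigate(inc)
        else:
            resolve(inc, label, "Closed during weekly review")
            closed += 1
    return closed
```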
After closing all incidents, bookmark the filtered queue URL. Next Monday, start here.
Time the exercise: how long did it take to review and classify all incidents? Your target is under 5 minutes for the incident queue check portion of your Monday review.
You're reading the free modules of M365 Security: From Admin to Defender
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.