AD0.6 Reading Security Alerts for the First Time

4-5 hours · Module 0 · Free
Operational Objective
The first time you open the Microsoft Defender portal and see security alerts, the experience is overwhelming. Red severity badges, unfamiliar terminology like "AiTM," "credential access," and "impossible travel," and a queue of incidents you don't know how to evaluate. The natural responses are to panic (every alert looks critical), to ignore (close the tab and hope someone else handles it), or to click randomly and make things worse. None of these are productive. This subsection teaches you to read the Defender portal incident queue calmly and systematically — understanding what alerts mean, which ones require immediate action, which ones can wait, and which ones are noise you can safely close.
Deliverable: The ability to navigate the Defender portal incident queue, read alert details, distinguish between severity levels, and make a confident triage decision: act now, investigate later, or close as false positive.
Estimated completion: 30 minutes
ALERT SEVERITY AND YOUR RESPONSE

HIGH / CRITICAL: active compromise likely. Signals: credential theft detected, malware executed, data exfiltration indicators. Response: act now, within the first 15 minutes (triage → contain → escalate). ~5-10% of all alerts.

MEDIUM: suspicious but not confirmed. Signals: unusual sign-in pattern, policy violation, anomalous activity. Response: investigate within 24 hours (check sign-in logs → decide). ~20-30% of all alerts.

LOW: minor findings. Signals: minor policy hit, informational detection, known false-positive pattern, user-reported phishing. Response: review weekly in a Monday-morning batch. ~40-50% of all alerts.

INFORMATIONAL: system noise. Signals: system notifications, configuration advisories, health reports, resolved automatically. Response: skim monthly; safe to ignore daily. ~20-30% of all alerts.

Figure AD0.6 — Alert severity levels and your response cadence. High/Critical alerts need immediate attention. Medium alerts need investigation within 24 hours. Low alerts are reviewed in weekly batches. Informational alerts are skimmed monthly. The percentages are approximate — your tenant's ratio depends on its size, configuration, and how many users click phishing links.
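If you later script against the incident queue, the cadence in the figure reduces to a small lookup table. A minimal sketch in Python (the severity labels mirror the portal's; the function and dictionary names are ours):

```python
# Response cadence from Figure AD0.6 encoded as a lookup.
# Labels follow the Defender portal's severity names.
RESPONSE_CADENCE = {
    "critical": "act now (first 15 minutes)",
    "high": "act now (first 15 minutes)",
    "medium": "investigate within 24 hours",
    "low": "review weekly",
    "informational": "skim monthly",
}

def response_for(severity: str) -> str:
    """Return the response cadence for a Defender severity label."""
    return RESPONSE_CADENCE.get(severity.lower(), "unknown severity")

print(response_for("High"))    # act now (first 15 minutes)
print(response_for("Medium"))  # investigate within 24 hours
```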

The incident queue: where to start

Navigate to security.microsoft.com and click “Incidents & alerts” in the left navigation. The incident queue shows security events that Microsoft has detected, correlated, and prioritised. Each row in the queue is an incident — a group of related alerts that tell a connected story.

The queue defaults to showing active incidents sorted by most recent. Each incident has a title (describing what happened), a severity (critical, high, medium, low, informational), an investigation state (new, in progress, resolved), and a list of affected entities (users, devices, mailboxes). The title is often generic — “Multi-stage incident involving phishing and credential theft” — but the severity and affected entities tell you how urgently you need to respond.

The first thing you do with the incident queue is filter. Click the severity filter and select “High” and “Critical.” This immediately reduces the queue to the alerts that actually need your attention right now. On a typical day in a 200-user tenant, you’ll see zero to two high/critical incidents. Most weeks, you’ll see none. If you see five or more, something is either wrong with your environment or wrong with your configuration (too many false positives from overly sensitive policies).
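The same High/Critical filter can be applied programmatically through the Microsoft Graph security API. A sketch that only constructs the query (it assumes an app registration with the SecurityIncident.Read.All permission; nothing is sent here — you would pass the result to an HTTP client such as requests):

```python
# Sketch: build the Graph query that mirrors the portal's
# High/Critical severity filter. Nothing is sent; a real client
# would supply an OAuth bearer token in the Authorization header.
GRAPH_INCIDENTS = "https://graph.microsoft.com/v1.0/security/incidents"

def high_critical_query(top: int = 50) -> dict:
    """Return the URL and OData query parameters for the filtered queue."""
    return {
        "url": GRAPH_INCIDENTS,
        "params": {
            "$filter": "severity eq 'high' or severity eq 'critical'",
            "$orderby": "createdDateTime desc",
            "$top": top,
        },
    }

q = high_critical_query()
print(q["url"])
print(q["params"]["$filter"])
```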

How to read an incident

Click on any incident to see its detail page. The detail page has several tabs, but three matter most for initial triage.

The Summary tab shows the attack story — what happened, in what order, and which entities were involved. Microsoft’s AI generates a narrative that connects the alerts into a timeline. Read this first. It tells you whether this is a phishing attempt that was blocked (low urgency), a credential compromise that’s active (high urgency), or a false positive triggered by legitimate user behaviour (no urgency).

The Evidence and response tab shows the specific artifacts — emails, files, URLs, IP addresses, user accounts — that triggered the alerts. This is where you see the actual phishing email, the attacker’s IP address, the compromised user account, and the malicious URL. Each artifact has a verdict: malicious, suspicious, or clean.

The Alerts tab lists the individual alerts that make up the incident. Each alert has its own severity, detection source (Defender for Office 365, Entra ID Protection, Defender for Endpoint), and description. Multiple alerts in one incident usually mean the attack progressed through multiple phases — phishing delivery, credential use, and post-compromise activity.

The five-minute triage decision

For an IT administrator, the triage decision is straightforward. Read the incident summary and ask three questions: Is this real or a false positive? If it’s real, is the attacker still active? What do I need to do in the next 15 minutes?

For the majority of alerts, the answer is: it’s a false positive or a blocked attack, the attacker is not active, and you don’t need to do anything except mark the incident as resolved. A phishing email that was blocked by Safe Links and never reached the user is an alert, not an incident requiring response. A sign-in from an unusual location that turns out to be an employee travelling is a false positive, not a compromise. Mark these as resolved with a classification (true positive, false positive, or benign) and move on.

For the minority of alerts where the answer is “yes, this looks real” — a user’s credentials appear to be compromised, or malware executed on an endpoint — your response is the 15-minute procedure: check the sign-in log for the affected user, look for signs the attacker used the credentials, and if confirmed, reset the password and revoke sessions. Module AD1.9 covers this procedure step by step.
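The three triage questions collapse into a small decision function. A sketch (the outcome labels are ours; the logic follows the text above):

```python
def triage(is_real: bool, attacker_active: bool) -> str:
    """Encode the three triage questions as a decision:
    real or false positive? attacker still active? act in 15 minutes?"""
    if not is_real:
        return "close: classify as false positive or benign"
    if attacker_active:
        return "act now: 15-minute containment procedure"
    return "resolve: blocked attack, classify as true positive"

# Blocked phishing email that never reached the user:
print(triage(is_real=True, attacker_active=False))
# Confirmed credential compromise with active sessions:
print(triage(is_real=True, attacker_active=True))
```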

Classifying and closing incidents — the practical workflow

When you close an incident, the Defender portal asks you to classify it. This isn’t bureaucratic busywork — your classifications train Microsoft’s detection models for your tenant and build your own historical record of what’s real versus noise. Here’s the classification system.

True positive — the alert detected real malicious activity. A phishing email that delivered credentials to an attacker, a compromised account that was used for BEC, or malware that executed on an endpoint. These get classified as true positive even if you contained the threat quickly.

False positive — the alert triggered on legitimate activity. An employee signing in from a hotel abroad that triggered an impossible travel alert. A developer running a penetration testing tool that triggered a malware detection. An admin running a PowerShell script that triggered a suspicious command execution alert. False positives are normal — the goal is to reduce them over time by tuning policies.

Benign true positive — the alert detected something real but not malicious. A security testing tool running as expected, or a user forwarding their email to a personal account with manager approval. The activity happened, the detection was correct, but it’s authorised.

To close an incident: click the incident, click “Manage incident” in the top bar, set the status to “Resolved,” select the classification, and optionally add a comment explaining your decision. Do this for every incident you review — even the obvious false positives. Unresolved incidents pile up and make the queue unusable.
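The same close-and-classify step exists in the Graph security API as a PATCH on the incident. A sketch of the request body only (assumes SecurityIncident.ReadWrite.All; note that "benign true positive" maps to the API value informationalExpectedActivity):

```python
import json

# Sketch: PATCH body for resolving a Defender incident via Graph.
# Maps the portal's three classifications to the API's enum values.
CLASSIFICATIONS = {
    "true positive": "truePositive",
    "false positive": "falsePositive",
    "benign true positive": "informationalExpectedActivity",
}

def close_incident_body(classification: str, comment: str = "") -> str:
    """Return the JSON body that resolves and classifies an incident."""
    body = {
        "status": "resolved",
        "classification": CLASSIFICATIONS[classification],
    }
    if comment:
        body["comment"] = comment
    return json.dumps(body)

print(close_incident_body("false positive",
                          "Employee travelling; sign-in verified."))
```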

Setting up alert notifications so the portal comes to you

The most effective monitoring isn’t checking a portal — it’s receiving a notification when something needs attention. Configure email notifications so high-severity incidents alert you immediately.

Navigate to security.microsoft.com → Settings → Microsoft Defender XDR → Email notifications → Incidents. Click “Add notification rule.” Configure it as follows:

Rule name: “High and Critical Incidents”
Severity: Select “High” and “Critical” only. Don’t include Medium or Low — the volume will train you to ignore notifications.
Notification recipients: Your admin email address and anyone else who should respond to security events.
Include organization name: Yes — useful if you manage multiple tenants.

Click “Save.” From now on, every High or Critical incident generates an email within minutes of detection. You no longer need to remember to check the portal — the portal emails you when something matters.

Test the notification: generate a test incident by navigating to Settings → Microsoft Defender XDR → Email notifications → Send a test email. Verify you receive it. If you don’t, check your email filtering rules — some organisations route Microsoft security notifications to a quarantine folder, which defeats the purpose entirely.

Over time, your classification history becomes valuable data. If you’re closing 30 false positives per month for the same alert type, that alert needs tuning. If you’re seeing a pattern of true positives from the same attack vector, the underlying control needs strengthening. The Defender portal tracks these statistics and surfaces them in the threat analytics reports.
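If you export that history, finding tuning candidates is a one-pass count. A sketch (the record format is illustrative, not an actual Defender export; the threshold of 3 is an assumption):

```python
from collections import Counter

# Sketch: flag alert types with repeated false positives as
# candidates for policy tuning. Record shape is illustrative only.
history = [
    {"alert": "Impossible travel", "classification": "falsePositive"},
    {"alert": "Impossible travel", "classification": "falsePositive"},
    {"alert": "Impossible travel", "classification": "falsePositive"},
    {"alert": "Suspicious inbox rule", "classification": "truePositive"},
]

def tuning_candidates(records, threshold=3):
    """Return alert types classified falsePositive at least `threshold` times."""
    fp = Counter(r["alert"] for r in records
                 if r["classification"] == "falsePositive")
    return [alert for alert, n in fp.items() if n >= threshold]

print(tuning_candidates(history))  # ['Impossible travel']
```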

Compliance Myth: "We need a 24/7 SOC to monitor these alerts"
A 24/7 SOC is ideal but not required. For a 200-user M365 tenant, the alert volume is manageable with a once-daily check of the incident queue for high/critical alerts and a weekly review of medium/low alerts. The majority of high-severity incidents also generate email notifications to your admin account — you don't need to watch the portal continuously. What you need is a response procedure for when a high-severity alert fires, and the discipline to check the queue at the same time each day. That's 5 minutes of daily effort, not a 24/7 operation.

The response actions available to you from the portal

When you confirm an incident is real, the Defender portal gives you response actions directly from the incident page — you don’t need to navigate to other portals for the most common containment steps.

For a compromised user account, click on the affected user entity in the incident, then select “Manage user.” From here you can disable the account (blocks all sign-ins), reset the password (forces re-authentication), and revoke sessions (terminates all active sessions including tokens). For most identity compromises, you’ll do all three: revoke sessions first (stops active attacker access immediately), then reset the password (prevents re-authentication), then investigate whether the account needs to stay disabled or can be re-enabled with new credentials.
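The same three-step sequence is scriptable through Microsoft Graph. A sketch that only constructs the ordered requests (assumes app permissions such as User.ReadWrite.All; the placeholder password is not a real value, and nothing is sent):

```python
# Sketch: the identity containment sequence as ordered Graph calls.
# Requests are constructed, not sent; run them in this order.
GRAPH = "https://graph.microsoft.com/v1.0"

def containment_plan(user_id: str):
    """Return (method, url, body) tuples for the three containment steps."""
    return [
        # 1. Revoke sessions first: invalidates active tokens immediately.
        ("POST", f"{GRAPH}/users/{user_id}/revokeSignInSessions", None),
        # 2. Reset the password so the attacker cannot re-authenticate.
        ("PATCH", f"{GRAPH}/users/{user_id}",
         {"passwordProfile": {"forceChangePasswordNextSignIn": True,
                              "password": "<new-temporary-password>"}}),
        # 3. Optionally disable the account pending investigation.
        ("PATCH", f"{GRAPH}/users/{user_id}", {"accountEnabled": False}),
    ]

for method, url, body in containment_plan("j.morrison@northgateeng.com"):
    print(method, url)
```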

For a compromised device, click on the affected device entity and select “Manage device.” You can isolate the device (cuts network access while maintaining the MDE cloud channel for investigation), run an antivirus scan remotely, collect an investigation package (memory dump, event logs, running processes), or restrict app execution (blocks any non-Microsoft-signed binary from running). Isolation is the most common action — it stops the attacker from using the device for lateral movement while preserving evidence for investigation.

For a malicious email, click on the email evidence in the incident. You can soft delete the email from the recipient’s mailbox, hard delete it, or move it to the Junk folder. If the same phishing email was sent to multiple users, you can use the “Take actions” option to remediate across all affected mailboxes simultaneously.

These actions are available from the Defender portal without requiring PowerShell, without navigating to other admin centers, and without needing a separate response tool. They’re designed for exactly the scenario you’re in: an IT administrator who needs to stop an active threat quickly. Module AD1.9 walks through the complete 15-minute response procedure using these actions.

Decision point

You open the Defender portal on Monday morning and see a high-severity incident: “Compromised user account — j.morrison@northgateeng.com.” The incident was created at 02:30 on Saturday. It’s now 09:00 Monday. The incident summary shows a sign-in from an unusual IP address in a different country, followed by inbox rule creation and email forwarding to an external address.

Option A: Investigate thoroughly before taking any action — review all sign-in logs, check every email the attacker sent, and build a complete timeline before resetting the password.

Option B: Immediately reset the password, revoke all sessions, and remove the inbox rule. Then investigate.

Option C: Escalate to the managed SOC immediately without touching anything.

The correct answer is Option B. The incident is 54 hours old. The attacker has had the entire weekend in the mailbox. Every minute you spend investigating before containment is another minute the attacker has access. Contain first: reset the password (blocks new sign-ins), revoke sessions (terminates active sessions), remove the inbox rule (stops ongoing forwarding). Then investigate what the attacker accessed during the 54-hour window. Speed of containment matters more than completeness of investigation.
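The inbox-rule removal in that containment sequence is also available via Graph. A sketch that only builds the list and delete requests (assumes mail read/write permissions for the mailbox; the rule ID placeholder stays a placeholder):

```python
# Sketch: Graph requests to list a mailbox's inbox rules and delete
# a malicious one. Requests are constructed here, not sent.
GRAPH = "https://graph.microsoft.com/v1.0"

def list_rules_url(user: str) -> str:
    """URL that returns all inbox rules for the mailbox."""
    return f"{GRAPH}/users/{user}/mailFolders/inbox/messageRules"

def delete_rule(user: str, rule_id: str):
    """(method, url) pair that removes one inbox rule by ID."""
    return ("DELETE", f"{list_rules_url(user)}/{rule_id}")

method, url = delete_rule("j.morrison@northgateeng.com", "<rule-id>")
print(method, url)
```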

Try it: Review your incident queue

Navigate to security.microsoft.com → Incidents & alerts → Incidents. Filter by severity: High and Critical. Set the time range to the last 30 days.

If you see incidents, click on the most recent one and read the Summary tab. Note: the severity, the affected user(s), the detection source, and the investigation state. Is this a real incident or a false positive? If it’s already resolved, who resolved it and how?

If you see no high/critical incidents in 30 days, that’s normal for a small tenant with basic protections. Switch the filter to Medium and review one or two — this is your practice material for learning to read the incident format before a real high-severity alert arrives.

Record the total number of incidents across all severity levels for the past 30 days. This is your alert volume baseline.

The Defender portal shows a medium-severity alert: "Suspicious inbox rule created" for a user account. The rule forwards all emails containing "invoice" or "payment" to an external email address. What is this most likely?
A legitimate rule the user created to forward invoices to their personal accountant — Possible but unlikely. Forwarding emails containing financial keywords to an external address is the signature of a BEC attack. Even if the user claims they created it, verify when the rule was created and whether it matches the user's normal behaviour. Check the sign-in log for the time the rule was created — was it the user's normal device and location?
A likely indicator of business email compromise — the attacker is monitoring financial communications — Correct. This is one of the most common BEC indicators. The attacker compromises the account, creates a rule to forward financial emails to their external address, and monitors the forwarded emails for payment opportunities. The rule runs silently — the user doesn't see it unless they specifically check their inbox rules. Treat this as a confirmed compromise until proven otherwise.
A false positive triggered by Microsoft's overly sensitive detection — Unlikely. Inbox rule creation alerts have a low false positive rate because the detection is specific: a rule was created that forwards email to an external address with financial keyword triggers. This is a high-fidelity detection.
An informational alert that doesn't require action — No. This alert is medium severity because the detection is specific and the impact is high. Ignoring it risks financial fraud. Investigate immediately.
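The BEC signature described in the correct answer, financial keywords plus external forwarding, makes a clean detection heuristic. A sketch (the field names loosely mirror Graph messageRule conditions and actions but are simplified; the keyword list and domain are assumptions):

```python
# Sketch: heuristic for the BEC inbox-rule signature — a rule that
# matches financial keywords AND forwards to an external address.
# Field names are simplified stand-ins, not exact Graph properties.
FINANCIAL_KEYWORDS = {"invoice", "payment", "wire", "remittance"}
INTERNAL_DOMAIN = "northgateeng.com"

def looks_like_bec_rule(rule: dict) -> bool:
    """True when the rule keys on financial terms and forwards externally."""
    keywords = {k.lower() for k in rule.get("bodyOrSubjectContains", [])}
    external = any(not addr.lower().endswith("@" + INTERNAL_DOMAIN)
                   for addr in rule.get("forwardTo", []))
    return bool(keywords & FINANCIAL_KEYWORDS) and external

rule = {"bodyOrSubjectContains": ["invoice", "payment"],
        "forwardTo": ["drop@mailbox.example"]}
print(looks_like_bec_rule(rule))  # True
```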

You're reading the free modules of M365 Security: From Admin to Defender

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
