In this module

AD5.1 Why Monitoring Is the Fifth Priority

5-6 hours · Module 5 · Free
Operational Objective
Modules AD1-AD4 deployed four security layers: identity (MFA + conditional access), email (Safe Links + Safe Attachments + anti-phishing), devices (compliance + CA enforcement), and data (sensitivity labels + DLP). Each layer generates security signals — sign-in blocks, phishing detections, compliance failures, DLP matches. But these signals are only valuable if someone looks at them. An MFA block that nobody reviews is a missed opportunity to investigate whether a user's credentials were compromised. A DLP match that nobody checks is a missed chance to prevent data loss. A device compliance failure that nobody follows up on is a gap that persists until the user happens to notice. Monitoring is what transforms deployed controls into an operational security program — it's the difference between "we have security controls" and "we know our security controls are working."
Deliverable: Understanding of why monitoring is essential, what signals your four deployed layers generate, and the monitoring cadence you'll build in this module.
Estimated completion: 20 minutes
[Figure AD5.1 — three-panel flow: Deployed Controls → Monitoring → Security Programme]
Panel 1 — Your 4 deployed layers (generating signals 24/7): Identity → sign-in blocks, risk detections; Email → phishing detections, Safe Links blocks; Devices → compliance failures, CA blocks; Data → DLP matches, label changes, shares. Without monitoring: signals are ignored, no one knows if controls work or fail, and incidents are discovered by accident.
Panel 2 — Monitoring (this module): 15-min Monday review; Defender incident queue triage; alert classification (TP/FP/BTP); Secure Score health check; sign-in log anomaly review; DLP + compliance monitoring; alert notifications configured. 30-45 min/week total.
Panel 3 — Security programme: controls deployed AND monitored; issues caught before they escalate; incidents discovered in minutes, not days; quarterly metrics track improvement; management has visibility; continuous improvement cycle; measurable, sustainable, evidence-based security.

Figure AD5.1 — The transition from deployed controls to security program. Without monitoring (left), controls work but nobody knows it — incidents are discovered by accident. With monitoring (centre), signals are reviewed weekly, issues are caught early, and the program is sustainable at 30-45 minutes per week. The result (right) is an evidence-based security program with measurable outcomes.

What your four layers generate

Each security layer you deployed produces specific signals that monitoring catches:

Identity (AD1). The sign-in log records every authentication event. Risky sign-ins (unfamiliar location, impossible travel, anonymous IP) generate Entra ID Protection alerts. Conditional access blocks appear as failed sign-ins with specific error codes. MFA registration changes are logged in the audit log. A weekly review of sign-in anomalies catches compromised accounts that passed MFA — AiTM attacks, MFA fatigue attacks, and token theft that your identity controls should have blocked but might have missed.

Email (AD2). The Defender portal shows email threats detected: phishing blocked by Safe Links, malware caught by Safe Attachments, impersonation attempts flagged by anti-phishing. The user-reported phishing queue shows what users flagged. The quarantine holds emails that were blocked. A weekly review confirms that email protection is working and catches campaigns targeting your organization.

Devices (AD3). The Intune compliance dashboard shows the current compliance rate and devices falling out of compliance. The conditional access sign-in log shows devices blocked by CA003. New devices enrolled but not yet compliant appear in the monitoring queue. A weekly check confirms that device health is maintained and catches compliance drift.

Data (AD4). The DLP Activity Explorer shows sensitive data matches and overrides. The label adoption dashboard shows classification coverage. The SharePoint sharing audit shows external access. A weekly review catches data protection incidents (bulk sensitive data shared externally) and policy tuning needs (false positive patterns).

The monitoring problem for IT administrators

You're not a SOC analyst. You don't have a dedicated security operations centre with 24/7 monitoring, SIEM dashboards, and escalation procedures. You're an IT administrator who is also responsible for security — alongside device management, user support, application maintenance, and everything else that keeps the M365 environment running.

The monitoring cadence for this role is not "watch the dashboard all day." It's a structured weekly review that takes 15-30 minutes on Monday morning — checking each layer's signals, investigating anything that looks wrong, and recording the results for your quarterly report. This is the realistic monitoring commitment: regular enough to catch issues early, structured enough to be repeatable, and short enough to be sustainable alongside your other responsibilities.

The Defender portal (security.microsoft.com) is your single starting point. The unified incident queue consolidates alerts from all Microsoft security services — Defender for Office 365, Entra ID Protection, and DLP. You don't need to check five different portals. You check one queue, review the incidents, classify them, and close them. The Secure Score dashboard validates that your controls are still configured correctly. The sign-in log catches authentication anomalies. Three checks, 15 minutes.

What happens without monitoring

Two real-world scenarios that monitoring would have caught:

Scenario 1: Silent credential compromise. An attacker phishes a user's credentials via AiTM. The session token is captured. MFA shows "satisfied by claim" — the attacker has a valid token. Conditional access blocks the token replay because the attacker's device isn't compliant (AD3 working). The blocked sign-in appears in the sign-in log as a CA003 failure from an unfamiliar IP. Without monitoring, nobody sees this entry. With monitoring, the Monday review catches it: "Why was there a blocked sign-in from an IP in Eastern Europe for user j.morrison at 02:00 on Thursday?" Investigation reveals the AiTM attack, the user's password is reset, and the incident is contained before the attacker tries a different approach.
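The investigation step in Scenario 1 can be sketched with a Graph PowerShell query — pull the user's recent sign-ins and look at the IP, location, device, and CA outcome. This is an illustrative fragment, not part of the course's scripts: the user name is hypothetical, and it requires the Microsoft.Graph.Reports module plus a signed-in Graph session, so it won't run standalone.

```powershell
# Illustrative sketch: review one user's sign-ins from the last 7 days.
# Requires: Install-Module Microsoft.Graph.Reports and AuditLog.Read.All consent.
Connect-MgGraph -Scopes "AuditLog.Read.All"
$since = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")

# 'j.morrison@contoso.com' is a hypothetical UPN for this scenario.
Get-MgAuditLogSignIn -Filter "userPrincipalName eq 'j.morrison@contoso.com' and createdDateTime ge $since" -All |
    Select-Object CreatedDateTime, IpAddress,
        @{n='Country'; e={ $_.Location.CountryOrRegion }},
        @{n='OS';      e={ $_.DeviceDetail.OperatingSystem }},
        ConditionalAccessStatus, RiskLevelDuringSignIn |
    Sort-Object CreatedDateTime -Descending |
    Format-Table -AutoSize
```

A blocked sign-in (ConditionalAccessStatus of failure) from an unfamiliar country on an unmanaged OS, at an hour the user wasn't working, is the pattern the Monday review is looking for.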

Scenario 2: Gradual compliance drift. Over 3 weeks, 8 devices fall out of compliance because users defer Windows Updates past the deadline. The compliance rate drops from 97% to 93%. Without monitoring, nobody notices until a user is blocked from M365 and calls the helpdesk, creating an urgent issue. With monitoring, the weekly compliance check catches the drift at the 2-device stage, and a reminder email to the 2 affected users resolves the issue before it becomes 8 users and a helpdesk escalation.

Monitoring doesn't require advanced tools or deep security expertise. It requires a calendar appointment, a checklist, and 15 minutes of attention every Monday. This module builds that checklist.

The dwell time problem

Dwell time is the period between initial compromise and detection. Industry research consistently shows that organizations without structured monitoring have median dwell times of 10-16 days for incidents detected internally. With a weekly Monday review, your maximum dwell time is 7 days — the period between reviews. With alert notifications for high-severity events (AD5.6), your dwell time for critical incidents drops to hours.

Consider the practical impact: an attacker who compromises a user's account on Tuesday afternoon and isn't detected until the following Monday has nearly six days to read emails, create forwarding rules, download sensitive documents, and plan further attacks. The same attacker detected on Tuesday evening through an alert notification has hours. The difference in damage potential between six days and a few hours is enormous.

Your monitoring cadence directly determines your dwell time — the faster you review signals, the sooner you detect compromises, and the less damage an attacker can do. The Monday review reduces dwell time from "whenever someone happens to notice" (potentially weeks or months) to a maximum of 7 days, with critical events detected in minutes through notifications.

Quantifying what your controls catch

Before starting the monitoring cadence, run a baseline assessment to understand the volume of security signals your environment generates. This gives you the "normal" reference point for your Monday reviews:

# Requires the Microsoft.Graph.Reports module (Install-Module Microsoft.Graph.Reports)
Connect-MgGraph -Scopes "AuditLog.Read.All"
$monthAgo = (Get-Date).AddDays(-30).ToString("yyyy-MM-ddTHH:mm:ssZ")

# Pull the last 30 days of sign-ins once, then count locally.
# -All pages through every result; a fixed -Top would silently truncate the counts.
# For a ~200-user tenant this is a manageable volume; narrow the window if it isn't.
$signIns = Get-MgAuditLogSignIn -Filter "createdDateTime ge $monthAgo" -All

# Total sign-in events
$totalSignIns = $signIns.Count
Write-Host "Total sign-ins (30d): $totalSignIns"

# Risky sign-ins (any risk level other than none/hidden)
$risky = ($signIns | Where-Object { $_.RiskLevelDuringSignIn -notin @('none','hidden') }).Count
Write-Host "Risky sign-ins (30d): $risky"

# Conditional access failures
$caFail = ($signIns | Where-Object { $_.ConditionalAccessStatus -eq 'failure' }).Count
Write-Host "CA failures (30d): $caFail"

# Percentage blocked (guard against an empty result set)
if ($totalSignIns -gt 0) {
    Write-Host "Block rate: $([math]::Round(($caFail / $totalSignIns) * 100, 2))%"
}

This baseline tells you: "In a normal month, NE has X sign-ins, Y are flagged as risky, and Z are blocked by conditional access." Your Monday review compares the week's numbers against this baseline. A week with 10x the normal CA failure rate is abnormal — investigate.
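The "compare this week against the baseline" check can be sketched as a small local helper. The function name and the 3x threshold are illustrative assumptions, not course material — tune the multiplier to your own environment:

```powershell
# Hypothetical helper: flag a week whose count is far above the 30-day baseline.
# The 3x default multiplier is an illustrative starting point, not guidance.
function Test-WeeklyAnomaly {
    param(
        [int]$Baseline30d,          # e.g. CA failures over the last 30 days
        [int]$ThisWeek,             # the same metric for the current week
        [double]$Multiplier = 3.0   # how far above the expected weekly figure is "abnormal"
    )
    # Scale the 30-day baseline to an expected weekly figure.
    $expectedWeekly = $Baseline30d * 7 / 30
    [pscustomobject]@{
        ExpectedWeekly = [math]::Round($expectedWeekly, 1)
        Observed       = $ThisWeek
        Investigate    = ($ThisWeek -gt $expectedWeekly * $Multiplier)
    }
}

# Example: 60 CA failures/30d is ~14/week; 45 this week is >3x that → investigate
Test-WeeklyAnomaly -Baseline30d 60 -ThisWeek 45
```

Feed it the weekly numbers from the Monday review; an `Investigate = True` result is the trigger to dig into the sign-in log rather than just record the count.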

The baseline also serves as evidence for management: "In the last 30 days, our conditional access policies blocked X attempted sign-ins from non-compliant devices and Y sign-ins flagged as risky by Microsoft's detection engine. Without these controls (Modules AD1-AD3), these sign-ins would have succeeded." This translates monitoring data into business justification — every blocked sign-in is a potential incident prevented.

Compliance Myth: "Our managed SOC partner monitors everything — we don't need to do our own monitoring"
A managed SOC (like BlueVoyant for NE) monitors specific threat alerts — typically Defender for Endpoint alerts, phishing detections, and identity threat signals. They don't monitor your Intune compliance dashboard, your DLP policy matches, your SharePoint sharing controls, your Secure Score, or your label adoption metrics. These are YOUR security controls with YOUR data — the managed SOC doesn't have context about your compliance policies, your DLP thresholds, or your label taxonomy. You need to monitor the controls you built (AD1-AD4) because only you understand the expected state and can identify when something drifts. The managed SOC monitors the threat landscape. You monitor the security posture.
Decision point

You've deployed all four security layers over 8 weeks. Your manager asks: "Do we need to hire a security analyst to monitor this?" How do you respond?

Option A: Yes — continuous monitoring requires a dedicated role.

Option B: Not at our size. The monitoring cadence for a 200-user M365 environment takes 30-45 minutes per week: a 15-minute Monday review plus monthly and quarterly checks. This is sustainable as part of the IT administrator role. If the organization grows past 500 users or if we deploy advanced tools like Sentinel, a dedicated analyst becomes justified. At our current scale, the structured weekly review covers everything.

The correct answer is Option B. The monitoring workload for a 200-user tenant with E3 controls is well within the capacity of an IT administrator. The structured review is designed for exactly this scenario — a non-security-specialist who needs a repeatable process to check that the security program is healthy. The quarterly report demonstrates the outcomes, and the weekly review catches the issues. A dedicated analyst would be underutilized at this scale.

Try it: Inventory your current monitoring state

Answer these questions honestly about your current monitoring:

1. How often do you check the Defender portal incident queue? (Daily / Weekly / Monthly / Never)
2. How often do you review the sign-in log for anomalies? (Daily / Weekly / Monthly / Never)
3. Do you receive email notifications for high-severity alerts? (Yes / No / Don't know)
4. When did you last check your Secure Score? (This week / This month / Never)
5. Do you review DLP Activity Explorer regularly? (Weekly / Monthly / Never)

If any answer is "Never" or "Don't know," this module addresses it. The goal is for every answer to be "Weekly" or "Yes" by the end of the module — with a structured process that takes 15-30 minutes per week.

Your Defender portal shows an incident from last Tuesday: "Suspicious sign-in activity" for user a.patel. The sign-in was blocked by conditional access (CA003 — non-compliant device). Nobody reviewed the incident — it's still "Active" in the queue. It's now Monday morning. What should you do?
Close it — CA003 blocked the access, so no harm was done — The block prevented access but the sign-in attempt itself is suspicious. Why was someone trying to access a.patel's account from a non-compliant device? Was it a.patel from a personal device, or an attacker with a.patel's credentials?
Ignore it — it's 5 days old, so if it were serious, something else would have happened by now — Attackers are patient. A blocked attempt may be followed by a different approach. The delay between the block and your review is exactly the gap monitoring is designed to close.
Investigate: check the sign-in detail (IP, location, device), check whether a.patel was traveling or using a personal device, check for other suspicious sign-ins for a.patel around the same time. If the IP is unfamiliar and a.patel wasn't traveling, treat it as a credential compromise — reset the password and check MFA methods — Correct. A blocked sign-in from an unfamiliar source is an indicator of credential compromise. CA003 prevented access, but the attacker still has the credentials. Without a password reset, they'll try again from a different angle. The Monday review is when you catch this — 5 days is longer than ideal, but far better than never.
Forward it to the managed SOC partner — If you have a managed SOC, they may already have a parallel alert. But the sign-in log context (was the user traveling?) is information only you have. Investigate first, then escalate if confirmed.

You're reading the free modules of M365 Security: From Admin to Defender

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
