In this module
AD5.1 Why Monitoring Is the Fifth Priority
Figure AD5.1 — The transition from deployed controls to security program. Without monitoring (left), controls work but nobody knows it — incidents are discovered by accident. With monitoring (centre), signals are reviewed weekly, issues are caught early, and the program is sustainable at 30-45 minutes per week. The result (right) is an evidence-based security program with measurable outcomes.
What your four layers generate
Each security layer you deployed produces specific signals that monitoring catches:
Identity (AD1). The sign-in log records every authentication event. Risky sign-ins (unfamiliar location, impossible travel, anonymous IP) generate Entra ID Protection alerts. Conditional access blocks appear as failed sign-ins with specific error codes. MFA registration changes are logged in the audit log. A weekly review of sign-in anomalies catches compromised accounts that passed MFA — AiTM attacks, MFA fatigue attacks, and token theft that your identity controls should have blocked but might have missed.
Email (AD2). The Defender portal shows email threats detected: phishing blocked by Safe Links, malware caught by Safe Attachments, impersonation attempts flagged by anti-phishing. The user-reported phishing queue shows what users flagged. The quarantine holds emails that were blocked. A weekly review confirms that email protection is working and catches campaigns targeting your organization.
Devices (AD3). The Intune compliance dashboard shows the current compliance rate and devices falling out of compliance. The conditional access sign-in log shows devices blocked by CA003. New devices enrolled but not yet compliant appear in the monitoring queue. A weekly check confirms that device health is maintained and catches compliance drift.
Data (AD4). The DLP Activity Explorer shows sensitive data matches and overrides. The label adoption dashboard shows classification coverage. The SharePoint sharing audit shows external access. A weekly review catches data protection incidents (bulk sensitive data shared externally) and policy tuning needs (false positive patterns).
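The identity-layer portion of the weekly review above can be sketched with the Microsoft Graph PowerShell SDK. This is a minimal sketch, assuming the Microsoft.Graph.Reports module is installed and you have the AuditLog.Read.All scope; the 7-day window and the selected columns are illustrative choices, not a prescribed script:

```powershell
# Assumes Connect-MgGraph -Scopes "AuditLog.Read.All" has already run
$weekAgo = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")

# Risky sign-ins from the last 7 days, per the AD1 weekly review
Get-MgAuditLogSignIn -Filter "createdDateTime ge $weekAgo and riskLevelDuringSignIn ne 'none' and riskLevelDuringSignIn ne 'hidden'" -All |
    Select-Object CreatedDateTime, UserPrincipalName, IPAddress, RiskLevelDuringSignIn |
    Format-Table
```

Anything this query returns is a candidate for investigation: an unfamiliar IP or an off-hours timestamp for a known user is exactly the signal the Monday review exists to catch.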
The monitoring problem for IT administrators
You're not a SOC analyst. You don't have a dedicated security operations centre with 24/7 monitoring, SIEM dashboards, and escalation procedures. You're an IT administrator who is also responsible for security — alongside device management, user support, application maintenance, and everything else that keeps the M365 environment running.
The monitoring cadence for this role is not "watch the dashboard all day." It's a structured weekly review that takes 15-30 minutes on Monday morning — checking each layer's signals, investigating anything that looks wrong, and recording the results for your quarterly report. This is the realistic monitoring commitment: regular enough to catch issues early, structured enough to be repeatable, and short enough to be sustainable alongside your other responsibilities.
The Defender portal (security.microsoft.com) is your single starting point. The unified incident queue consolidates alerts from all Microsoft security services — Defender for Office 365, Entra ID Protection, and DLP. You don't need to check five different portals. You check one queue, review the incidents, classify them, and close them. The Secure Score dashboard validates that your controls are still configured correctly. The sign-in log catches authentication anomalies. Three checks, one starting point, 15 minutes.
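The same unified incident queue can be pulled from PowerShell rather than the portal. A sketch, assuming the Microsoft.Graph.Security module and the SecurityIncident.Read.All scope (the filter and columns are illustrative):

```powershell
Connect-MgGraph -Scopes "SecurityIncident.Read.All"

# Open incidents consolidated across Defender for Office 365,
# Entra ID Protection, and DLP
Get-MgSecurityIncident -Filter "status eq 'active'" |
    Select-Object DisplayName, Severity, CreatedDateTime |
    Sort-Object Severity |
    Format-Table
```

This is useful when you want the Monday review to produce a saved artifact (pipe to Export-Csv) rather than a portal screenshot.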
What happens without monitoring
Two real-world scenarios that monitoring would have caught:
Scenario 1: Silent credential compromise. An attacker phishes a user's credentials via AiTM. The session token is captured. MFA shows "satisfied by claim" — the attacker has a valid token. Conditional access blocks the token replay because the attacker's device isn't compliant (AD3 working). The blocked sign-in appears in the sign-in log as a CA003 failure from an unfamiliar IP. Without monitoring, nobody sees this entry. With monitoring, the Monday review catches it: "Why was there a blocked sign-in from an IP in Eastern Europe for user j.morrison at 02:00 on Thursday?" Investigation reveals the AiTM attack, the user's password is reset, and the incident is contained before the attacker tries a different approach.
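The blocked sign-in in Scenario 1 is the kind of entry a query like the following surfaces. A sketch only — the country column assumes the sign-in record's Location property is populated, and the 7-day window matches the Monday review cadence:

```powershell
# Assumes Connect-MgGraph -Scopes "AuditLog.Read.All" has already run
$weekAgo = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")

# Sign-ins blocked by conditional access in the last week, with location context
Get-MgAuditLogSignIn -Filter "createdDateTime ge $weekAgo and conditionalAccessStatus eq 'failure'" -All |
    Select-Object CreatedDateTime, UserPrincipalName, IPAddress,
        @{ n = 'Country'; e = { $_.Location.CountryOrRegion } } |
    Format-Table
```

A 02:00 failure from an unexpected country for a user who normally signs in from one city is the "why was this blocked?" question the scenario describes.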
Scenario 2: Gradual compliance drift. Over 3 weeks, 8 devices fall out of compliance because users defer Windows Updates past the deadline. The compliance rate drops from 97% to 93%. Without monitoring, nobody notices until a user is blocked from M365 and calls the helpdesk, creating an urgent issue. With monitoring, the weekly compliance check catches the drift at the 2-device stage, and a reminder email to the 2 affected users resolves the issue before it becomes 8 users and a helpdesk escalation.
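The compliance drift in Scenario 2 can be spotted with a weekly device check. A sketch assuming the Microsoft.Graph.DeviceManagement module and the DeviceManagementManagedDevices.Read.All scope:

```powershell
Connect-MgGraph -Scopes "DeviceManagementManagedDevices.Read.All"

# Group Intune-managed devices by compliance state to spot week-over-week drift
$devices = Get-MgDeviceManagementManagedDevice -All
$devices | Group-Object ComplianceState | Select-Object Name, Count

# List the non-compliant devices so affected users can get a reminder email
$devices | Where-Object { $_.ComplianceState -eq 'noncompliant' } |
    Select-Object DeviceName, UserPrincipalName, LastSyncDateTime
```

Comparing this week's group counts against last week's is what catches the drift at the 2-device stage instead of the 8-device helpdesk escalation.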
Monitoring doesn't require advanced tools or deep security expertise. It requires a calendar appointment, a checklist, and 15 minutes of attention every Monday. This module builds that checklist.
The dwell time problem
Dwell time is the period between initial compromise and detection. Industry research consistently shows that organizations without structured monitoring have median dwell times of 10-16 days for incidents detected internally. With a weekly Monday review, your maximum dwell time is 7 days — the period between reviews. With alert notifications for high-severity events (AD5.6), your dwell time for critical incidents drops to hours.
Consider the practical impact: an attacker who compromises a user's account on Tuesday afternoon and isn't detected until the following Monday has 5 days to read emails, create forwarding rules, download sensitive documents, and plan further attacks. The same attacker detected on Tuesday evening through an alert notification has hours. The difference in damage potential between 5 days and 4 hours is enormous.
Your monitoring cadence directly determines your dwell time — the faster you review signals, the sooner you detect compromises, and the less damage an attacker can do. The Monday review reduces dwell time from "whenever someone happens to notice" (potentially weeks or months) to a maximum of 7 days, with critical events detected in minutes through notifications.
Quantifying what your controls catch
Before starting the monitoring cadence, run a baseline assessment to understand the volume of security signals your environment generates. This gives you the "normal" reference point for your Monday reviews:
Connect-MgGraph -Scopes "AuditLog.Read.All","SecurityEvents.Read.All"
$monthAgo = (Get-Date).AddDays(-30).ToString("yyyy-MM-ddTHH:mm:ssZ")

# Total sign-in events (-All pages through every record; -Top would cap the count)
$totalSignIns = (Get-MgAuditLogSignIn -Filter "createdDateTime ge $monthAgo" -All).Count
Write-Host "Total sign-ins (30d): $totalSignIns"

# Risky sign-ins flagged by Entra ID Protection
$risky = (Get-MgAuditLogSignIn -Filter "createdDateTime ge $monthAgo and riskLevelDuringSignIn ne 'none' and riskLevelDuringSignIn ne 'hidden'" -All).Count
Write-Host "Risky sign-ins (30d): $risky"

# Sign-ins blocked by conditional access
$caFail = (Get-MgAuditLogSignIn -Filter "createdDateTime ge $monthAgo and conditionalAccessStatus eq 'failure'" -All).Count
Write-Host "CA failures (30d): $caFail"

# Percentage of sign-ins blocked (guard against an empty log)
if ($totalSignIns -gt 0) {
    Write-Host "Block rate: $([math]::Round(($caFail / $totalSignIns) * 100, 2))%"
}

This baseline tells you: "In a normal month, NE has X sign-ins, Y are flagged as risky, and Z are blocked by conditional access." Your Monday review compares the week's numbers against this baseline. A week with 10x the normal CA failure rate is abnormal — investigate.
The baseline also serves as evidence for management: "In the last 30 days, our conditional access policies blocked X attempted sign-ins from non-compliant devices and Y sign-ins flagged as risky by Microsoft's detection engine. Without these controls (Modules AD1-AD3), these sign-ins would have succeeded." This translates monitoring data into business justification — every blocked sign-in is a potential incident prevented.
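The week-versus-baseline comparison can be made mechanical. A sketch where the baseline value and the 10x investigation threshold are illustrative assumptions, not recommendations:

```powershell
# Weekly baseline derived from the 30-day assessment (illustrative number)
$baselineWeeklyCaFail = 12   # e.g. 50 CA failures in 30 days / ~4.3 weeks

$weekAgo = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")
$thisWeekCaFail = (Get-MgAuditLogSignIn -Filter "createdDateTime ge $weekAgo and conditionalAccessStatus eq 'failure'" -All).Count

# Flag a week that runs well past the baseline rate
if ($thisWeekCaFail -gt 10 * $baselineWeeklyCaFail) {
    Write-Host "ABNORMAL: $thisWeekCaFail CA failures this week vs ~$baselineWeeklyCaFail baseline - investigate"
}
```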
You've deployed all four security layers over 8 weeks. Your manager asks: "Do we need to hire a security analyst to monitor this?" How do you respond?
Option A: Yes — continuous monitoring requires a dedicated role.
Option B: Not at our size. The monitoring cadence for a 200-user M365 environment takes 30-45 minutes per week: a 15-minute Monday review plus monthly and quarterly checks. This is sustainable as part of the IT administrator role. If the organization grows past 500 users or if we deploy advanced tools like Sentinel, a dedicated analyst becomes justified. At our current scale, the structured weekly review covers everything.
The correct answer is Option B. The monitoring workload for a 200-user tenant with E3 controls is well within the capacity of an IT administrator. The structured review is designed for exactly this scenario — a non-security-specialist who needs a repeatable process to check that the security program is healthy. The quarterly report demonstrates the outcomes, and the weekly review catches the issues. A dedicated analyst would be underutilized at this scale.
Try it: Inventory your current monitoring state
Answer these questions honestly about your current monitoring:
1. How often do you check the Defender portal incident queue? (Daily / Weekly / Monthly / Never)
2. How often do you review the sign-in log for anomalies? (Daily / Weekly / Monthly / Never)
3. Do you receive email notifications for high-severity alerts? (Yes / No / Don't know)
4. When did you last check your Secure Score? (This week / This month / Never)
5. Do you review DLP Activity Explorer regularly? (Weekly / Monthly / Never)
If any answer is "Never" or "Don't know," this module addresses it. The goal is for every answer to be "Weekly" or "Yes" by the end of the module — with a structured process that takes 15-30 minutes per week.
You're reading the free modules of M365 Security: From Admin to Defender
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.