AD5.9 Integrating All Monitoring into One Cadence

5-6 hours · Module 5 · Free
Operational Objective
You now have monitoring procedures for five areas: the Defender incident queue (AD5.3), the sign-in log (AD5.8), Secure Score (AD5.7), email threats (AD5.2), and device compliance + DLP (AD5.2). Each check uses a different portal, different filters, and different expected outcomes. Without a structured cadence that integrates them into one workflow, the checks compete for attention and the least urgent ones get skipped. This subsection consolidates all monitoring into a single weekly/monthly/quarterly cadence — one workflow, one log, one source of truth for your security posture.
Deliverable: A consolidated monitoring cadence document covering weekly, monthly, and quarterly activities — with time estimates, portal navigation, and the log format that feeds your quarterly management report.
Estimated completion: 20 minutes
INTEGRATED MONITORING CADENCE — ALL LAYERS

EVERY MONDAY (15 min) — core habit, never skip
1. Incident queue — Active High/Med (3 min)
2. Sign-in anomalies — risky + CA failures (3 min)
3. Secure Score — stable? (2 min)
4. Email threats — delivered phishing? (3 min)
5. Device + DLP — compliance + matches (4 min)
Plus 15 min of investigation if anything is flagged.

FIRST OF MONTH (30 min) — automated data collection, feeds the quarterly report
1. Run compliance report PowerShell (5 min)
2. Run data protection metrics script (5 min)
3. Check label adoption dashboard (5 min)
4. Review DLP overrides for the month (5 min)
5. Record all metrics in spreadsheet (10 min)

FIRST OF QUARTER (60 min) — strategic review, reporting, and programme management
1. Compile quarterly report (20 min)
2. Review exception registers (10 min)
3. SharePoint sharing audit (10 min)
4. Review Secure Score improvements (10 min)
5. Present to management (10 min)

Figure AD5.9 — The integrated monitoring cadence. Weekly (15 min Monday): the 5-check review. Monthly (30 min): metric collection via PowerShell scripts. Quarterly (60 min): report compilation, exception review, sharing audit, and management presentation. Total scheduled annual time: ~23 hours (about 28 hours including ad-hoc investigation) for a complete, evidence-based security program.

The weekly security log

Create a simple spreadsheet or document that captures your Monday review results. One row per week, columns for each check:

| Date   | Incidents (Active) | Sign-In Flags | Secure Score | Delivered Phishing | Compliance % | DLP Matches | Notes |
| 14 Apr | 0                  | 0             | 67%          | 0                  | 97%          | 2           | Normal week |
| 21 Apr | 1 (Med - FP)       | 1 (travel)    | 67%          | 0                  | 96%          | 3           | Classified FP: user travel |
| 28 Apr | 0                  | 0             | 65%          | 1                  | 97%          | 1           | Score drop: investigated, CA001 temp disabled, re-enabled |

This log takes 2 minutes to fill in after your review. After 13 weeks, you have a quarter's worth of data that shows: security posture is stable (consistent scores), monitoring is active (every week has an entry), and issues are caught and resolved (notes column documents actions taken).

The log is both your operational record and your compliance evidence. If an auditor asks "how do you monitor security?", the log demonstrates weekly, structured monitoring with documented outcomes. If your manager asks "what did you find this quarter?", the log provides 13 data points across 7 metrics.

The monthly metric collection

On the first business day of each month, run the monitoring scripts you built throughout the course:

# 1. Compliance report (from AD3.9)
.\Get-ComplianceReport.ps1

# 2. Data protection metrics (from AD4.10)
.\Get-DataProtectionMetrics.ps1

# 3. Secure Score
Connect-MgGraph -Scopes "SecurityEvents.Read.All"
$score = Get-MgSecuritySecureScore -Top 1
Write-Host "Secure Score: $($score.CurrentScore) / $($score.MaxScore)"

# 4. Sign-in risk summary (trailing 30 days)
$monthAgo = (Get-Date).AddDays(-30).ToString("yyyy-MM-ddTHH:mm:ssZ")
$riskyCount = (Get-MgAuditLogSignIn -Filter "createdDateTime ge $monthAgo and riskLevelDuringSignIn ne 'none' and riskLevelDuringSignIn ne 'hidden'" -Top 500).Count
Write-Host "Risky sign-ins (30d): $riskyCount"

Record the output in your metrics spreadsheet. Three months of monthly data produces the quarterly report automatically — you're just summarising numbers you've already collected.
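The monthly recording step can be scripted the same way as the weekly log. A minimal sketch follows — the column names and placeholder values are illustrative (not taken from the course scripts), and the path is a temp location for demonstration; in practice you would use C:\SecurityScripts\MonthlyMetrics.csv and populate the properties from the script outputs above:

```powershell
# Append this month's metrics to a running CSV (illustrative columns/values)
$monthlyEntry = [PSCustomObject]@{
    Month           = (Get-Date -Format "yyyy-MM")
    SecureScore     = "486/727"   # placeholder -- use $score from the Secure Score check
    RiskySignIns30d = 3           # placeholder -- use $riskyCount from the sign-in summary
    ComplianceRate  = "97%"       # placeholder -- from Get-ComplianceReport.ps1 output
    Notes           = ""
}
# Temp path for the sketch; swap in C:\SecurityScripts\MonthlyMetrics.csv in production
$csvPath = Join-Path ([System.IO.Path]::GetTempPath()) "MonthlyMetrics.csv"
$monthlyEntry | Export-Csv -Path $csvPath -Append -NoTypeInformation
```

One row per month means the quarterly report is a three-row read, not a data-gathering exercise.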

The total time commitment

Add it up:

| Activity                      | Frequency         | Time           | Annual Total   |
| Monday security review        | Weekly (52/year)  | 15 min         | 13 hours       |
| Investigation (when flagged)  | ~10/year          | 30 min average | 5 hours        |
| Monthly metric collection     | Monthly (12/year) | 30 min         | 6 hours        |
| Quarterly report + review     | Quarterly (4/year)| 60 min         | 4 hours        |
| Total                         |                   |                | ~28 hours/year |

28 hours per year — roughly 30 minutes per week on average — for a complete, evidence-based security monitoring program across four security layers. This is the time commitment you communicate to your manager. It's sustainable, it's structured, and it produces measurable outcomes documented in the quarterly report.
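The annual total can be sanity-checked with a few lines, using the figures from the table above:

```powershell
# Verify the annual time budget from the activity table
$activities = @(
    [PSCustomObject]@{ Name = "Monday security review";    RunsPerYear = 52; Minutes = 15 }
    [PSCustomObject]@{ Name = "Investigation (flagged)";   RunsPerYear = 10; Minutes = 30 }
    [PSCustomObject]@{ Name = "Monthly metric collection"; RunsPerYear = 12; Minutes = 30 }
    [PSCustomObject]@{ Name = "Quarterly report + review"; RunsPerYear = 4;  Minutes = 60 }
)
$totalMinutes = ($activities | ForEach-Object { $_.RunsPerYear * $_.Minutes } |
    Measure-Object -Sum).Sum
$totalHours = $totalMinutes / 60
Write-Host "Annual total: $totalHours hours (~$([math]::Round($totalMinutes / 52)) min/week averaged)"
# -> Annual total: 28 hours (~32 min/week averaged)
```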

Compliance Myth: "Security monitoring requires a SIEM and a dedicated team"
A SIEM (like Sentinel) and a dedicated SOC team are valuable for large enterprises with complex environments, multiple security tools, and high alert volumes. For a 200-user M365 tenant on E3, the built-in Defender portal, Entra ID sign-in logs, Intune compliance dashboard, and Purview DLP Activity Explorer provide all the monitoring data you need. The structured weekly review processes this data in 15 minutes. A SIEM adds value when you need to correlate across non-Microsoft sources, write custom detection rules, or process thousands of alerts per day. At the scale of most Admin to Defender learners (100-500 users, E3 licensing), the built-in tools are sufficient and the monitoring cadence in this module is the right approach.
Decision point

Your manager sees the 28 hours/year estimate and asks: "Can we automate this away entirely? Why do we need a person checking dashboards?" How do you explain the value of human review?

Option A: We could automate it with Logic Apps and Power Automate — the checks could run automatically and only alert on anomalies.

Option B: Some checks can be supplemented with automation (alert notifications already handle high-severity events). But the Monday review adds human judgment that automation can't replace: understanding whether a "suspicious" sign-in is actually travel, whether a DLP override is legitimate, whether a Secure Score drop is a temporary troubleshooting change or a permanent configuration error. Automation catches the obvious. Human review catches the nuanced. 15 minutes per week of human judgment is the most cost-effective security investment in the program.

The correct answer is Option B. Alert notifications (AD5.6) are the automation layer — they push critical events to you immediately. The Monday review is the judgment layer — you assess the events that automation flagged as "possibly suspicious" and make the TP/FP/BTP decision. Removing the human review means relying entirely on automated classification, which has a false positive rate that requires human assessment. 15 minutes of human judgment per week is not a cost to eliminate — it's the practice that makes the entire security program operational.

# Monday-Security-Review.ps1
# Run every Monday at 09:00 — produces the weekly security summary

Write-Host "========================================" -ForegroundColor Cyan
Write-Host "  MONDAY SECURITY REVIEW — $(Get-Date -Format 'dd MMM yyyy')" -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan

Connect-MgGraph -Scopes "AuditLog.Read.All","SecurityEvents.Read.All","DeviceManagementManagedDevices.Read.All" -NoWelcome

$weekAgo = (Get-Date).AddDays(-7).ToString("yyyy-MM-ddTHH:mm:ssZ")

# CHECK 1: Risky sign-ins
Write-Host "`n--- CHECK 1: SIGN-IN ANOMALIES ---" -ForegroundColor Yellow
$risky = Get-MgAuditLogSignIn -Filter "createdDateTime ge $weekAgo and riskLevelDuringSignIn ne 'none' and riskLevelDuringSignIn ne 'hidden'" -Top 100
$caFail = Get-MgAuditLogSignIn -Filter "createdDateTime ge $weekAgo and conditionalAccessStatus eq 'failure'" -Top 100
Write-Host "Risky sign-ins: $($risky.Count)"
Write-Host "CA failures: $($caFail.Count)"
if ($risky.Count -gt 0) {
    $risky | Select-Object CreatedDateTime, UserPrincipalName, RiskLevelDuringSignIn,
        @{N="IP";E={$_.IpAddress}} | Format-Table -AutoSize
}

# CHECK 2: Secure Score
Write-Host "`n--- CHECK 2: SECURE SCORE ---" -ForegroundColor Yellow
$score = Get-MgSecuritySecureScore -Top 1
$pct = [math]::Round($score.CurrentScore / $score.MaxScore * 100, 1)
Write-Host "Current: $($score.CurrentScore)/$($score.MaxScore) ($pct%)"

# CHECK 3: Device compliance
Write-Host "`n--- CHECK 3: DEVICE COMPLIANCE ---" -ForegroundColor Yellow
$devices = Get-MgDeviceManagementManagedDevice -All
$compliant = ($devices | Where-Object { $_.ComplianceState -eq "compliant" }).Count
$total = $devices.Count
# Guard against division by zero when no devices are enrolled
$rate = if ($total -gt 0) { [math]::Round($compliant / $total * 100, 1) } else { 0 }
Write-Host "Compliance rate: $compliant/$total ($rate%)"
$nonCompliant = $devices | Where-Object { $_.ComplianceState -eq "noncompliant" }
if ($nonCompliant.Count -gt 0) {
    Write-Host "Non-compliant devices:" -ForegroundColor Red
    $nonCompliant | Select-Object DeviceName, UserPrincipalName | Format-Table
}

Write-Host "`n--- REVIEW COMPLETE ---" -ForegroundColor Green
Write-Host "Check Defender incident queue manually: security.microsoft.com"
Write-Host "Check email threats manually: security.microsoft.com → Reports"
Write-Host "Check DLP matches manually: purview.microsoft.com → DLP → Activity explorer"
# Append this to the end of Monday-Security-Review.ps1
$logEntry = [PSCustomObject]@{
    Date = (Get-Date -Format "yyyy-MM-dd")
    Incidents = "Check portal"
    RiskySignIns = $risky.Count
    CAFailures = $caFail.Count
    SecureScore = "$($score.CurrentScore)/$($score.MaxScore)"
    ComplianceRate = "$rate%"
    DLPMatches = "Check portal"
    Notes = ""
}
$logEntry | Export-Csv -Path "C:\SecurityScripts\WeeklySecurityLog.csv" -Append -NoTypeInformation
Write-Host "`nLog entry added to WeeklySecurityLog.csv" -ForegroundColor Green
Try it: Build your monitoring calendar and log

1. Create the weekly security log. Open Excel or SharePoint. Create the table from this subsection with 7 columns. Fill in this week's data from your Monday review.

2. Set calendar appointments:
   - Every Monday 09:00: "Security Review" (30 min) — 5-check review + investigation time
   - First business day each month: "Monthly Security Metrics" (30 min) — run PowerShell scripts
   - First business day each quarter: "Quarterly Security Report" (60 min) — compile report + exception review

3. Save your PowerShell scripts. Create a folder C:\SecurityScripts\ containing: Get-ComplianceReport.ps1 (AD3.9), Get-DataProtectionMetrics.ps1 (AD4.10), and the sign-in log review script (AD5.8). Each Monday, you open PowerShell, run the scripts, and record the results.

4. Bookmark your portal URLs. Bookmark the filtered Defender incident queue, Entra sign-in logs, and Secure Score page. Three bookmarks, three clicks, three checks.

The entire monitoring infrastructure takes 30 minutes to set up. After that, it runs on a calendar and a checklist — requiring no additional setup, no new tools, and no additional licensing.
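If you want the runner script launched automatically rather than from a calendar reminder, Windows Task Scheduler can start it every Monday at 09:00. This is a Windows-only configuration sketch; the script path follows the C:\SecurityScripts\ convention used in this module, and you would run it once from an elevated PowerShell session:

```powershell
# Sketch: register a weekly scheduled task for the Monday runner (Windows only)
$action = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\SecurityScripts\Monday-Security-Review.ps1"
$trigger = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 9am
Register-ScheduledTask -TaskName "Monday Security Review" -Action $action -Trigger $trigger
```

Note that the task only launches the summary script — reading its output and making the TP/FP judgment calls remains the Monday habit.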

Consolidating all scripts into one Monday runner

Instead of running individual scripts for each check, create a single Monday runner script that executes all checks and produces a summary:

Save this as C:\SecurityScripts\Monday-Security-Review.ps1. Run it every Monday. The automated checks (sign-ins, Secure Score, compliance) take 30 seconds. The manual checks (incident queue, email reports, DLP) require portal navigation — the script reminds you to do them.

Over time, you can extend this script to check DLP matches (requires Exchange Online PowerShell connection), email threat counts (requires Security & Compliance PowerShell), and even generate the weekly log entry automatically. Start simple, add capability as needed.

Automating the weekly log entry

After the Monday runner script produces its output, pipe the key metrics into a CSV that builds your log automatically:

The "Check portal" entries remind you to fill in the manual checks (incident queue and DLP) after completing the portal review. Open the CSV at the end of the month to copy the data into your quarterly report. After 13 weeks, the CSV contains your complete quarterly dataset — no retrospective data gathering needed.
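At quarter end, the same CSV can be summarised in a few lines rather than read row by row. A minimal sketch, using inline sample rows for illustration (column names match the log-entry script above; in practice you would start with `$log = Import-Csv "C:\SecurityScripts\WeeklySecurityLog.csv"`):

```powershell
# Sketch: summarise a quarter of weekly log entries
# Inline sample data stands in for the real WeeklySecurityLog.csv
$log = @"
Date,RiskySignIns,CAFailures,SecureScore,ComplianceRate,Notes
2025-04-14,0,2,486/727,97%,Normal week
2025-04-21,1,3,486/727,96%,FP: user travel
2025-04-28,0,1,471/727,97%,Score drop investigated
"@ | ConvertFrom-Csv

$weeks        = $log.Count
$totalRisky   = ($log | Measure-Object -Property RiskySignIns -Sum).Sum
$flaggedWeeks = ($log | Where-Object { [int]$_.RiskySignIns -gt 0 }).Count
Write-Host "Weeks logged: $weeks | Risky sign-ins: $totalRisky | Weeks with flags: $flaggedWeeks"
```

The same pattern (`Measure-Object -Sum`, `Where-Object` counts) works for CA failures or any other numeric column you add to the log.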

When to consider upgrading monitoring tools

The monitoring cadence in this module works on E3 with built-in tools. But as your environment grows or your security maturity increases, you may need to consider upgrading:

200-500 users, E3: The Monday review + alert notifications are sufficient. This is where most Admin to Defender learners operate. Estimated weekly time: 15-30 minutes.

500-1,000 users, E3/E5 mix: Consider adding Microsoft Sentinel (SIEM) for log aggregation and custom detection rules. The volume of sign-in events and alerts may exceed what manual review can handle effectively. Sentinel provides automated correlation, custom analytics rules, and workbook dashboards that scale the monitoring without proportionally increasing human time.

1,000+ users, E5: Dedicated security analyst recommended. The alert volume, incident complexity, and regulatory obligations at this scale justify a full-time role. The monitoring cadence from this module becomes the analyst's daily procedure rather than the IT admin's weekly supplement.

Handling monitoring during holidays and absences

When you're on holiday, your Monday review doesn't happen — unless you've prepared a handover. Create a one-page monitoring handover document for whoever covers your security responsibilities:

"Security Monitoring Handover"
- Monday review: Follow the checklist at [bookmark URL]. The 5 checks take 15 minutes. Record results in the weekly log at [SharePoint/Excel URL].
- Alert notifications: High-severity alerts go to [your email — set up forwarding to the covering person]. Medium severity arrives as a daily digest. If a High alert arrives, follow the escalation contacts sheet at [location].
- Managed SOC: [BlueVoyant] handles after-hours High-severity alerts. Escalation email: [email]. No action needed from you unless they call.
- Known patterns: [CEO] travels frequently — impossible travel alerts from her are usually FP. Confirm via calendar before classifying. [MarketingAutomation] service principal signs in daily at 06:00 — this is expected.
- If something looks wrong and you're unsure: Contact me at [personal phone]. I'd rather be interrupted on holiday than miss a breach.

This handover takes 15 minutes to prepare. Without it, monitoring stops when you're away — creating a 1-2 week gap where incidents accumulate unnoticed.

Your Monday review log shows 13 consecutive weeks of "Normal week" with zero incidents, zero sign-in flags, stable Secure Score, and zero delivered phishing. Is this a sign that monitoring is unnecessary?
- Yes — 13 clean weeks prove the controls are working, so monitoring is overhead. → Incorrect: 13 clean weeks prove the controls ARE working. They don't prove they'll work next week.
- Probably — reduce to monthly reviews to save time. → Incorrect: monthly reviews create a 30-day gap where issues accumulate unnoticed.
- No, but consider extending to biweekly reviews. → Incorrect: biweekly creates a 14-day gap. At 15 minutes per week, the time saving (8 minutes per week) isn't worth the doubled detection gap.
- No — 13 clean weeks are evidence for the quarterly report that monitoring is active and the security posture is stable. The 15 minutes per week is an investment in early detection AND in documented security governance. The value is both operational (catching the week that ISN'T clean) and compliance (demonstrating active monitoring to auditors and management). → Correct: monitoring is valuable when it catches something AND when it confirms that nothing is wrong. The log of 13 clean weeks is evidence of a healthy, monitored environment. The alternative — no monitoring — provides no evidence at all.

You're reading the free modules of M365 Security: From Admin to Defender

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
