In this module
AD4.9 Monitoring Labels and DLP
Figure AD4.9 — Data protection monitoring at three cadences. Weekly DLP review catches active incidents. Monthly label tracking verifies adoption. Quarterly sharing audits remove stale external access and feed the management report.
Weekly DLP review
Navigate to purview.microsoft.com → Data Loss Prevention → Activity explorer every Monday as part of your security review. Filter by the last 7 days.
Check three things: total DLP matches this week (is it increasing, stable, or decreasing?), high-severity matches (any bulk data exfiltration attempts?), and overrides (review justifications for legitimacy).
For a quick PowerShell summary:
Connect-IPPSSession
# DLP incidents in the last 7 days
$report = Get-DlpDetailReport -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date)
Write-Host "DLP matches this week: $($report.Count)"
$report | Group-Object PolicyName | Select-Object Name, Count | Format-Table
$report | Group-Object PolicyAction | Select-Object Name, Count | Format-Table

This gives you: total matches, matches by policy, and the action breakdown (blocked, overridden, notified). If the "Override" count is climbing week over week, either a routine workflow needs a policy exception (AD4.8) or users are habitually overriding without good reason (investigate).
Monthly label adoption tracking
Navigate to purview.microsoft.com → Data Classification → Overview. The dashboard shows label distribution across your tenant: how many documents have each label, the trend over time, and which labels are most/least used.
Key metrics: total labeled documents (should be growing), percentage of documents with each label (Internal should dominate at 70-80%), and unlabeled documents (should be decreasing toward zero with mandatory labeling enabled).
If the "Internal" label accounts for 95%+ of all labels after 3 months, users may not be reclassifying sensitive documents; they're leaving everything at the default. This isn't a labeling failure; it's a user behavior issue. Address it with targeted communication: "If you work with client contracts, financial reports, or HR data, please change the label from Internal to Confidential. This adds encryption that protects the content if it's accidentally shared."
Quarterly SharePoint sharing audit
Run the external sharing audit from AD4.6 quarterly. Compare the external user count to the previous quarter. If the count is growing, new external shares are being created faster than old ones are being removed. If the count is stable, your sharing controls are working. If the count is declining, stale shares are being cleaned up.
Connect-SPOService -Url https://northgateeng-admin.sharepoint.com
# Count external users per site
Get-SPOSite -Limit All | ForEach-Object {
$ext = (Get-SPOExternalUser -SiteUrl $_.Url -ErrorAction SilentlyContinue).Count
if ($ext -gt 0) { Write-Host "$($_.Url): $ext external users" }
}

Remove external users who no longer need access. A vendor whose project ended 6 months ago doesn't need access to the project site today. Each stale external user is a potential access path that no one is monitoring.
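To act on the audit results, a minimal removal sketch. This assumes you have already confirmed with the site owner that the user's engagement has ended; the email address is a placeholder, and the -Filter lookup is one way to locate the account before removal:

```powershell
# Sketch: remove one confirmed-stale external user.
# "vendor@example.com" is a placeholder — substitute the real account.
$stale = Get-SPOExternalUser -Filter "vendor@example.com"
if ($stale) {
    # Remove-SPOExternalUser takes the UniqueId values returned above
    Remove-SPOExternalUser -UniqueIDs @($stale.UniqueId)
}
```

Removal revokes the external account's access across the tenant, so confirm the user has no other legitimate shares before running it.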
Your monthly label adoption report shows that 98% of documents are labeled, but 91% have the "Internal" label and only 3% have "Confidential." You know from experience that at least 15-20% of your organization's documents contain client data, financial information, or HR records that should be labeled "Confidential." What does this tell you?
Option A: The label taxonomy is wrong — "Internal" covers too broad a range.
Option B: The default label is doing its job (100% coverage) but users aren't reclassifying sensitive documents because they don't understand when to use "Confidential." Send a targeted communication to departments that handle sensitive data (finance, HR, legal, client services) with specific examples of documents that should be "Confidential."
The correct answer is Option B. The taxonomy is correct — four labels, clear definitions. The coverage is excellent — 98% labeled. The gap is user awareness: users are leaving sensitive documents at the default "Internal" instead of upgrading to "Confidential." This is a training issue, not a taxonomy issue. Targeted department-specific communication with concrete examples ("client contracts should be Confidential, not Internal") is more effective than global announcements.
The audit log shows who is actively applying or changing labels, which feeds the per-department analysis later in this module:
Connect-ExchangeOnline
# Sensitivity label changes in the last 30 days
Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
-Operations "SensitivityLabelApplied","SensitivityLabelChanged" `
-ResultSize 500 |
Select-Object CreationDate, UserIds, Operations,
@{N="Label";E={($_.AuditData | ConvertFrom-Json).SensitivityLabelEventData.SensitivityLabelId}} |
Group-Object UserIds |
Select-Object Name, Count |
Sort-Object Count -Descending | Format-Table

Connect-IPPSSession
# Check DLP rule conditions for tuning
Get-DlpComplianceRule -Policy "DLP — Personal Data Protection" |
Select-Object Name, ContentContainsSensitiveInformation | Format-List

Try it: Run your first data protection monitoring cycle
Complete these three checks now:
1. DLP Activity Explorer: purview.microsoft.com → DLP → Activity explorer → last 7 days. How many matches? Any overrides?
2. Label adoption: purview.microsoft.com → Data Classification → Overview. What's the label distribution? What percentage is each label?
3. External sharing: Run the PowerShell external user audit. How many sites have external users?
Record these numbers as your data protection baseline. Compare them monthly. After 3 months, you have a trend for the quarterly report.
Deep-diving label adoption by department
The overall adoption rate (95%+) tells you the program is working. The per-department breakdown tells you WHERE it's working and where it isn't. Navigate to purview.microsoft.com → Data Classification → Content explorer. This shows labeled content by location (SharePoint site, OneDrive, Exchange). Filter by site to see which departments have the highest Confidential label usage and which leave everything at the default "Internal."
For a more targeted analysis, use the audit log to see who's actively changing labels. The Search-UnifiedAuditLog query shown earlier (filtering on SensitivityLabelApplied and SensitivityLabelChanged, grouped by user) produces exactly this list.
Users who actively change labels (from Internal to Confidential) are your best adopters — they understand the system. Users who never change labels either don't handle sensitive content (fine) or don't understand when to reclassify (needs targeted communication). Identify the second group by cross-referencing: users in Finance, HR, or Legal who handle sensitive content daily but show zero label changes in the audit log are leaving everything at the default.
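The cross-reference described above can be sketched in a few lines. This is an illustration, not a course-mandated script: $financeUsers is a hypothetical, manually maintained list of accounts in a sensitive-data department, and the placeholder addresses are not real:

```powershell
# Sketch: find users in a sensitive-data department with zero label changes
# in the last 30 days. $financeUsers is a hypothetical manual list.
$financeUsers = @("alice@northgateeng.com", "bob@northgateeng.com")

# Users who appear in the label-change audit events (same query as earlier)
$labelChangers = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
    -Operations "SensitivityLabelApplied","SensitivityLabelChanged" -ResultSize 500 |
    Select-Object -ExpandProperty UserIds -Unique

# The gap: handles sensitive data daily, but never reclassifies
$financeUsers | Where-Object { $_ -notin $labelChangers }
```

Anyone this outputs is a candidate for the targeted communication, not an automatic assumption of wrongdoing; they may genuinely have handled no sensitive content that month.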
DLP policy tuning workflow
DLP tuning is ongoing, not one-time. After the initial false positive adjustment (AD4.7), monitor the match quality monthly. Navigate to purview.microsoft.com → DLP → Activity explorer → filter by last 30 days.
Calculate your true positive rate: genuine sensitive data matches divided by total matches. Target: 85%+. If the rate is below 85%, the policy is generating too many false positives — tune the SIT thresholds and confidence levels.
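The rate calculation itself is simple arithmetic. The two counts below are illustrative placeholders; substitute the numbers from your own Activity Explorer triage:

```powershell
# Sketch: true positive rate from a manual triage of last month's matches.
# Both counts are example values — use your own from Activity Explorer.
$totalMatches = 120
$genuineMatches = 96   # matches you confirmed were real sensitive data

$tpRate = [math]::Round(($genuineMatches / $totalMatches) * 100, 1)
Write-Host "True positive rate: $tpRate% (target: 85%+)"
if ($tpRate -lt 85) { Write-Host "Below target: tune SIT thresholds and confidence levels" }
```

With the example numbers above (96 of 120), the rate is 80%, which falls below the 85% target and would trigger a tuning pass.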
Common tuning adjustments after the first month:
Credit card false positives from long numbers. Equipment serial numbers, order IDs, and tracking numbers sometimes match the credit card pattern. Fix: increase the credit card minimum count from 1 to 3, and add a "high confidence" requirement. High confidence means the pattern match includes additional validators (Luhn checksum, card network prefix) that reduce false hits.
NINO false positives from formatted text. Some document templates contain alphanumeric reference codes that match the NINO format (two letters + six digits + one letter). Fix: add a "proximity" condition — the NINO must appear near keywords like "national insurance," "NI number," "NINO," or "tax reference." This ensures the detected pattern is actually a NINO, not a random code.
Legitimate bulk data workflows. If a specific team regularly sends bulk data externally (payroll to HMRC, payment files to the bank), create a DLP rule exception for that team's mailbox to specific destination domains. This eliminates routine overrides while keeping the policy active for unexpected sharing.
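The first adjustment (raising the credit card minimum count and confidence) can be applied in PowerShell rather than the portal. A hedged sketch: the rule name is an assumption based on this course's policy naming, and the hashtable keys (Name, minCount, minConfidence) follow the documented parameter shape for ContentContainsSensitiveInformation; verify the exact SIT name and rule identity in your tenant before running:

```powershell
# Sketch: require 3+ credit card matches at high confidence.
# Rule identity is an assumption — check Get-DlpComplianceRule for yours.
Connect-IPPSSession
Set-DlpComplianceRule -Identity "DLP — Personal Data Protection rule" `
    -ContentContainsSensitiveInformation @(
        @{Name = "Credit Card Number"; minCount = "3"; minConfidence = "85"}
    )
```

Note that -ContentContainsSensitiveInformation replaces the rule's existing SIT conditions rather than merging with them, so include every SIT the rule should keep.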
Building the data protection health dashboard
Combine all data protection metrics into a single monthly check that takes 15 minutes:
Check 1 (2 min): Label adoption rate — purview.microsoft.com → Data Classification → Overview → total labeled, distribution by label.
Check 2 (3 min): DLP matches — purview.microsoft.com → DLP → Activity explorer → last 30 days → total matches, overrides, blocks.
Check 3 (5 min): Override review — filter Activity explorer by "Override" → review justifications → flag any suspicious overrides for investigation.
Check 4 (3 min): External sharing — run the SharePoint external user count query → compare to last month → remove stale users.
Check 5 (2 min): Policy health — run Get-DlpCompliancePolicy and Get-LabelPolicy in PowerShell → verify all policies are enabled and in the correct mode.
Log these five check results in a simple spreadsheet with one row per month. The trend across months tells the story for the quarterly report: label adoption stable or growing, DLP matches declining (users learning), overrides stable and justified, external sharing controlled.
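The spreadsheet row can be a one-liner appended to a CSV each month. The field names, path, and values below are suggestions, not course-mandated; fill in the numbers from the five checks:

```powershell
# Sketch: append this month's five check results to a running CSV.
# All values shown are illustrative placeholders.
$row = [PSCustomObject]@{
    Month           = (Get-Date).ToString("yyyy-MM")
    LabeledPercent  = 98
    DlpMatches      = 120
    Overrides       = 7
    ExternalUsers   = 14
    PoliciesHealthy = $true
}
$row | Export-Csv -Path ".\DataProtectionHealth.csv" -Append -NoTypeInformation
```

After three months the CSV holds the trend data the quarterly report needs, with no extra collation work.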
Using Content Explorer for deep analysis
Content Explorer (purview.microsoft.com → Data Classification → Content explorer) lets you drill into specific documents that contain sensitive information. Unlike Activity Explorer (which shows DLP policy matches at the point of sharing), Content Explorer shows where sensitive data EXISTS across your tenant — even if nobody has tried to share it yet.
Navigate to Content explorer → Sensitive information types → select "UK National Insurance Number." The explorer shows every document and email in your tenant that contains a NINO: the file name, location (SharePoint site, OneDrive, Exchange), and the number of SIT matches in each document.
This is powerful for identifying data concentrations. If one SharePoint site contains 200 documents with NINOs, that site needs the "Confidential" default library label (AD4.5) and tighter sharing controls (AD4.6). If one user's OneDrive contains 50 documents with credit card numbers, that warrants a conversation — why is bulk credit card data stored in personal OneDrive instead of a managed SharePoint site with appropriate controls?
Content Explorer requires E5 for full functionality (browsing document content). On E3, you can see the document locations and SIT match counts but not preview the actual content. The location and count data alone is valuable for identifying where your sensitive data concentrations are.
Setting up DLP email alerts
By default, DLP incident reports are sent to the admin email address you specified in the policy rule. For faster response to high-severity matches, configure email alerts that arrive immediately instead of waiting for the weekly Activity Explorer review.
In each DLP policy rule, under "Incident reports," configure: send alert to your security mailbox, include the user name, the file or email, and the sensitive information types detected. Set the alert severity to "High" for rules that detect large volumes of sensitive data (10+ SIT matches in a single document or email).
For lower-volume matches (1-3 SIT matches), keep the severity at "Medium" — these are reviewed weekly in the Activity Explorer, not investigated immediately. The goal is to alert on potential bulk data exfiltration while keeping routine policy tips out of your alert inbox.
You can also route DLP alerts to a Microsoft Teams channel or a Logic App for automated processing. For E3, email routing is the simplest approach. Configure a mail rule in your admin mailbox to flag DLP alerts with a specific category so they're easy to find during the weekly review.
Activity Explorer retains DLP event data for 30 days by default. If you need longer retention for compliance or audit purposes, configure the DLP activity to forward to a Log Analytics workspace where you can set custom retention periods (configurable up to 2 years; charges apply beyond the included retention). This also enables KQL queries against DLP data — useful for the quarterly security report and for correlating DLP events with sign-in anomalies from Entra ID.
You're reading the free modules of M365 Security: From Admin to Defender
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.