3.4 Insider Risk Management: Policies, Indicators, and Risk Signals
Domain 3 — Manage Incident Response: "Investigate and remediate threats identified by Microsoft Purview insider risk policies." Understanding how IRM generates alerts is prerequisite knowledge for investigating those alerts in subsection 3.5.
Introduction
DLP detects sensitive data at the point of action — the moment an email is sent, a file is shared, or a document is copied. Insider Risk Management detects patterns of behavior over time. A single file download is not suspicious. Five hundred file downloads in two hours from a user who submitted their resignation last week is a high-confidence data theft indicator. IRM’s value is in the behavioral context that individual events cannot provide.
IRM exists because some of the most damaging data breaches are caused by insiders rather than external attackers. The Verizon Data Breach Investigations Report (DBIR) consistently finds that insider threats account for a significant proportion of data breaches, and the damage per incident is typically higher because insiders already have legitimate access to the data they exfiltrate. They do not need to compromise credentials, escalate privileges, or evade network controls. They simply download, copy, or share data they are already authorized to access, but for unauthorized purposes.
As a SOC analyst, you may encounter IRM in two contexts. First, an IRM alert may be the primary investigation trigger — the IRM system detected anomalous behavior and generated an alert. Second, during a security incident investigation (compromised account, endpoint malware), you may need to check IRM for correlated insider risk signals that provide additional context about the user’s behavior before and during the incident.
This subsection teaches you how IRM policies work, what behavioral indicators they monitor, how risk scores are calculated, and the privacy controls that govern IRM data access. Subsection 3.5 covers the investigation and case management workflow.
IRM policy types
IRM provides policy templates designed for specific insider threat scenarios. Each template defines the indicators to monitor, the triggering events that activate monitoring, and the risk score thresholds that generate alerts.
Data theft by departing employees is the most common and most impactful IRM policy. It monitors employees who have submitted their resignation or been notified of termination (the triggering event comes from the HR connector) for data exfiltration indicators in the period between the trigger and their departure date. The logic is straightforward: an employee who downloads 2,000 files the week before their last day is likely taking data with them. The policy monitors file downloads from SharePoint and OneDrive, email forwarding to personal accounts, USB device usage, printing of sensitive documents, and cloud upload activity.
Data leaks monitors for accidental or intentional sharing of sensitive data with external parties. Unlike DLP (which detects specific sensitive data types), data leaks IRM monitors the pattern: a user who has never shared files externally suddenly shares 50 documents in one day. The behavioral anomaly — deviation from the user’s historical baseline — is the signal, not the content of the files.
Security policy violations monitors for users who disable or circumvent security controls. This includes disabling Microsoft Defender Antivirus, repeatedly overriding DLP policy tips, installing prohibited software, and connecting unauthorized devices. These actions may indicate an insider who is preparing the environment for data theft by removing the controls that would detect it.
Triggering events and the HR connector
IRM policies do not monitor everyone all the time. They monitor specific users after a triggering event indicates elevated risk. This is a privacy-by-design feature — IRM does not conduct mass surveillance of all employee activity. It activates monitoring for specific individuals when a risk-relevant event occurs.
The HR connector is the primary triggering event source. When the HR system sends a signal to IRM (via API integration), IRM activates the relevant policy for that user. Common HR signals include resignation submission (activates data theft policy), performance improvement plan (PIP) initiation (activates data leak and security violation policies), termination notification (activates data theft policy with higher sensitivity), and contractor end-date approaching (activates data theft policy).
DLP policy matches can also trigger IRM monitoring. When a user generates a DLP alert, IRM can use that alert as a triggering event to begin monitoring the user’s broader behavioral pattern. This creates a feedback loop: DLP detects a specific data event, IRM begins monitoring the user’s overall behavior, and if the behavioral pattern matches a risk profile, IRM generates its own alert with broader context than the original DLP alert provided.
Security alert triggers connect IRM to the broader security ecosystem. When a user’s account is flagged by Defender for Identity (anomalous authentication), Entra ID Protection (risky sign-in), or Defender for Endpoint (malware detection), IRM can use these signals as triggering events. This is how IRM contributes to compromised account investigations — the security alert triggers IRM monitoring, and IRM provides the data access pattern analysis that tells you what the compromised account accessed.
Behavioral indicators and sequence detection
After a triggering event activates monitoring, IRM tracks behavioral indicators — specific actions that contribute to the user’s risk score. Individual indicators have limited significance. Their power is in combination and sequence.
File activity indicators track downloads from SharePoint and OneDrive (volume, frequency, and whether files are labeled as sensitive), file copies to USB devices, file uploads to personal cloud storage (detected via browser URL monitoring), file printing (volume and whether printed files contain sensitive labels), and file renaming (mass renaming may indicate preparation for bulk exfiltration — changing file names to hide the nature of the content).
Email activity indicators track emails sent to external recipients (volume, recipient domains, attachment volume), email forwarding rules created (particularly rules that forward to personal email addresses), and emails with attachments containing sensitive labels or DLP-matched content.
SharePoint and Teams indicators track sharing permission changes (making content available to broader audiences or external users), site collection downloads (bulk downloading an entire site), and Teams file sharing with external participants.
Endpoint indicators require MDE onboarding (Module 2) and track USB device connections, file copies to removable media, application installations, and browsing activity to cloud storage and file-sharing sites.
Sequence detection is IRM’s most sophisticated capability. Instead of alerting on individual indicators, sequence detection identifies ordered chains of actions that match known data theft patterns. For example: user submits resignation (triggering event) → user connects USB drive to corporate laptop (indicator 1) → user downloads 200 files from SharePoint (indicator 2) → user copies files to USB drive (indicator 3) → user visits personal Dropbox in browser (indicator 4) → user uploads files to Dropbox (indicator 5). Each individual action might be innocent. The sequence, occurring after a resignation trigger, matches the departing employee data theft pattern with high confidence.
IRM's sequence detection is what distinguishes it from simple threshold-based alerting. A user who downloads 500 files because they are preparing a department report is not the same as a user who downloads 500 files, copies them to USB, and uploads them to personal cloud storage the week before their resignation. Both have the same download volume. Only the second has the risk sequence. IRM's algorithms weight sequential patterns higher than individual high-volume events because the sequence indicates intent.
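To make the distinction concrete, here is a minimal Python sketch of the idea behind sequence detection. This is illustrative only: IRM's real engine is a proprietary ML model, and the event names and the sequence pattern are invented for the example.

```python
# Illustrative sketch only: IRM's real engine is a proprietary ML model,
# and these event names are invented for the example.

EXFIL_SEQUENCE = ["sharepoint_download", "usb_copy", "cloud_upload"]

def contains_ordered_sequence(events, pattern):
    """True if `pattern` occurs within `events` in order, with any number
    of unrelated events interleaved. Sequence detection cares about
    order, not adjacency."""
    remaining = iter(events)
    return all(step in remaining for step in pattern)

# Same download activity, very different risk:
report_prep = ["sharepoint_download", "sharepoint_download", "print"]
data_theft = ["sharepoint_download", "usb_connect", "usb_copy",
              "dropbox_visit", "cloud_upload"]

print(contains_ordered_sequence(report_prep, EXFIL_SEQUENCE))  # False
print(contains_ordered_sequence(data_theft, EXFIL_SEQUENCE))   # True
```

Both users downloaded files; only the second user's events contain the ordered exfiltration chain, which is what elevates the classification.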
Risk score calculation and escalation
IRM calculates a risk score for each monitored user based on the cumulative weight of detected indicators. The score is not a simple count — it incorporates indicator severity (downloading labeled documents scores higher than downloading unlabeled ones), indicator recency (actions in the last 24 hours score higher than actions from 7 days ago), sequence detection (completed sequences score higher than isolated indicators), and baseline deviation (actions that deviate significantly from the user’s 90-day historical pattern score higher than actions within normal range).
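The weighting described above can be sketched as a toy scoring function. The weights, decay constant, and multipliers below are invented for illustration; IRM's actual scoring model is not public.

```python
# Illustrative only: the weights, decay constant, and multipliers below
# are invented; IRM's actual scoring model is not public.
from math import exp

SEVERITY = {
    "download_spike": 2.0,   # volume above baseline
    "usb_first_use": 3.0,    # first USB connection in 90 days
    "bulk_usb_copy": 5.0,    # bulk copy to removable media
    "cloud_upload": 5.0,     # upload to personal cloud storage
}

def indicator_score(kind, hours_ago, baseline_ratio):
    """One indicator's contribution: severity weight, scaled by recency
    (recent actions count more) and by deviation from the user's
    90-day baseline (capped so one outlier cannot dominate)."""
    recency = exp(-hours_ago / 72.0)
    deviation = min(baseline_ratio, 10.0) / 5.0
    return SEVERITY[kind] * recency * deviation

def risk_score(indicators, sequence_complete=False):
    score = sum(indicator_score(*ind) for ind in indicators)
    if sequence_complete:
        score *= 2.0   # a completed exfiltration sequence outweighs
                       # the same indicators seen in isolation
    return score
```

Note the two properties the prose describes: the same indicator scores lower as it ages, and a completed sequence multiplies the total rather than merely adding one more indicator.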
The risk score determines the alert severity. Low risk scores generate Low severity alerts (or no alert at all, depending on the policy threshold). High risk scores generate High severity alerts that indicate a high-confidence insider threat requiring immediate investigation. The following timeline shows how indicators escalate the risk level for a departing employee:
| Day | Event | Indicator | Risk level |
|---|---|---|---|
| March 1 | Resignation submitted | Triggering event — monitoring begins | — |
| March 3 | 50 files downloaded from SharePoint | Above daily average (normal: 5-10) | Low |
| March 5 | USB device connected | First USB use in 90 days | Medium |
| March 5 | 200 files copied to USB | Bulk copy to removable media | High |
| March 6 | Personal Dropbox accessed via browser | Sequence detected: download→USB→cloud | Critical |
How IRM indicators combine: a worked example
To make the risk score calculation concrete, trace how indicators accumulate for a departing employee data theft scenario.
On Day 0, the HR connector sends a resignation signal. IRM activates the departing employee data theft policy. The risk score starts at zero — the triggering event activates monitoring but does not itself generate a risk score.
On Day 2, the user downloads 50 files from SharePoint. Their 90-day daily average is 5 files. The volume indicator fires: “download volume 10x above baseline.” This is a single indicator — the score moves to Low. No alert is generated because a single indicator, even at 10x volume, has legitimate explanations (preparing a handover document, archiving project files before departure).
On Day 3, the user connects a USB device. Their 90-day USB usage history shows zero connections — this is a first-time USB use indicator. The score moves from Low to Medium. Still no alert — USB connection alone is not sufficient for a high-confidence classification.
On Day 3 (30 minutes later), the user copies 200 files from the Downloads folder to the USB drive. The bulk copy to removable media indicator fires. The score escalates from Medium to High because two indicators fired in sequence (USB connection immediately followed by bulk file copy), and both are in the “exfiltration” indicator category. An alert may be generated at High (depending on the policy threshold configuration).
On Day 4, the user opens a browser, navigates to personal Dropbox, and uploads files. The personal cloud storage indicator fires. But more importantly, IRM’s sequence detection engine recognizes the completed chain: SharePoint download → USB copy → personal cloud upload. This is the departing employee data theft sequence. The risk score jumps to Critical. A Critical alert is generated.
The sequence detection is what makes this Critical rather than just a collection of High-severity individual indicators. Each individual action has innocent explanations. The complete sequence, in order, following a resignation trigger, has no innocent explanation.
This is not how IRM works internally (IRM uses its own ML-based risk engine), but the same investigation logic can be expressed as a KQL hunting query: check whether the three exfiltration indicators (SharePoint download, USB copy, personal cloud upload) all fired within the investigation window. If all three are present, the sequence is complete.
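A minimal sketch of such a hunting query follows. It assumes Microsoft Defender advanced hunting tables (`CloudAppEvents`, `DeviceFileEvents`, `DeviceNetworkEvents`); the account identifier, drive letter, URL filter, and `ActionType` values are placeholders, not a guaranteed schema.

```kql
// Hypothetical hunting sketch. Table names are from Defender advanced
// hunting; the account, drive letter, and URL filters are placeholders.
let window_start = ago(7d);
let suspect = "user@contoso.com";
let downloads = CloudAppEvents
    | where Timestamp > window_start
    | where AccountDisplayName =~ suspect and ActionType == "FileDownloaded"
    | extend Stage = "sharepoint_download";
let usb_copies = DeviceFileEvents
    | where Timestamp > window_start
    | where InitiatingProcessAccountUpn =~ suspect
    | where FolderPath startswith "E:\\"          // assumed USB drive letter
    | extend Stage = "usb_copy";
let uploads = DeviceNetworkEvents
    | where Timestamp > window_start
    | where InitiatingProcessAccountUpn =~ suspect
    | where RemoteUrl has "dropbox.com"
    | extend Stage = "cloud_upload";
union downloads, usb_copies, uploads
| summarize StagesSeen = dcount(Stage),
            FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
| where StagesSeen == 3    // all three indicators fired: sequence complete
```

The final `where` clause is the sequence-completeness check: three distinct stages observed for the same user inside the window.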
Privacy-by-design architecture
IRM monitors employee behavior, which creates privacy obligations. Microsoft designed IRM with several privacy controls that directly affect how you access and investigate IRM data.
Pseudonymization replaces user names with anonymous identifiers (such as “User-1247”) in the IRM dashboard and alert views. Analysts see the risk score, the indicators, and the behavioral timeline, but not the user’s name. Only analysts in the Insider Risk Management Investigators role group can de-anonymize user identities to proceed with a formal investigation. This two-tier access model ensures that initial alert review (which may involve many false positives) does not expose employee identities unnecessarily.
Scoped access limits which users’ IRM data each analyst can see. The IRM administrator can configure analyst access scopes so that each analyst only sees alerts for specific departments, regions, or user groups. A UK-based analyst sees only UK employee alerts. A Finance risk analyst sees only Finance department alerts. This prevents broad access to sensitive behavioral data.
Audit logging of analyst actions records every action an IRM analyst takes — every alert viewed, every case opened, every user de-anonymized. This audit trail provides accountability and supports regulatory compliance. If an employee challenges an IRM investigation, the audit trail proves that the investigation followed proper procedures.
Data retention limits automatically delete IRM data after the configured retention period (30, 60, 90, or 120 days). Behavioral data that did not result in an alert or case is purged automatically. This limits the accumulation of employee surveillance data beyond what is needed for active risk management.
Try it yourself
In the Purview portal, navigate to Insider Risk Management → Policies. If your lab tenant has E5 licensing, explore the available policy templates. Note the triggering events available for each template (HR connector, DLP policy match, security alert), the indicator categories, and the risk score thresholds. If your lab does not have E5, the IRM section will show a licensing prompt — review the Microsoft documentation for the policy template descriptions instead. The goal is familiarity with IRM's policy structure before the investigation workflow in subsection 3.5.
What you should observe
The policy templates show the preconfigured scenarios (data theft, data leaks, security violations). Each template has configurable triggering events, indicators, and thresholds. Creating a policy in a lab environment requires an HR connector or an alternative triggering event configuration. For exam preparation, understanding the policy types, the triggering event concept, the indicator categories, and the risk score escalation model is more important than creating a test policy — the exam tests conceptual understanding and investigation workflow, not policy creation steps.
Knowledge check
Check your understanding
1. An IRM policy for departing employees generates a Critical alert for a user who resigned last week. The indicators show: 500 file downloads from SharePoint (10x daily average), USB device connected for the first time in 6 months, and 500 files copied to the USB. What makes this Critical rather than just High?
2. You have Security Administrator access and want to review an insider risk alert. You navigate to IRM in the Purview portal but cannot see any alerts. Why?
3. What is the difference between how DLP and IRM detect data exfiltration?