3.5 Investigating Insider Risk Alerts and Managing Cases

12-16 hours · Module 3


SC-200 Exam Objective

Domain 3 — Manage Incident Response: "Investigate and remediate threats identified by Microsoft Purview insider risk policies."

Introduction

Subsection 3.4 taught you how IRM policies detect insider threats through behavioral patterns and risk score escalation. This subsection teaches you what happens when an alert fires — the investigation workflow that takes you from an anonymized risk signal to a documented case with evidence, stakeholder coordination, and resolution.

IRM investigation differs from standard security incident response in three fundamental ways. First, the subject of the investigation is an employee, not an external attacker — which introduces employment law, privacy law, and HR policy considerations that do not apply to external threat investigations. Second, the investigation involves pseudonymized data that must be de-anonymized through a controlled process — you cannot simply look up the user’s name. Third, the outcome may be a personnel action (termination, disciplinary action, legal proceedings) rather than a technical remediation (password reset, device isolation) — which requires coordination with HR and legal counsel, not just the SOC team.

These differences do not make IRM investigations more difficult. They make them more procedurally sensitive. This subsection teaches you the procedures.


The IRM investigation workflow

IRM investigation workflow — from alert to resolution: ① Triage (pseudonymized review) → ② De-anonymize (authorized identification) → ③ Deep review (activity timeline + content explorer) → ④ Create case (HR + legal notification) → ⑤ Investigate (collect evidence; eDiscovery if needed) → ⑥ Resolve (benign / disciplinary / termination / legal)
Figure 3.7: The six-step IRM investigation workflow. Unlike standard IR (which moves from detection to containment quickly), IRM investigation includes a mandatory de-anonymization step and HR/legal coordination before the formal investigation phase. This procedural framework exists because the subject is an employee with legal protections.

Step 1: Triage — pseudonymized review. IRM alerts appear in the Purview portal’s Insider Risk Management → Alerts section. By default, the user identity is pseudonymized — you see “User-3847” instead of the employee’s name. Review the alert’s risk score, the indicators that contributed to it, the policy that triggered, and the behavioral timeline. At this stage, you are determining whether the alert warrants investigation, not who the user is.

Most IRM alerts at triage are Low or Medium severity and can be resolved without de-anonymization. A user who downloaded 20% more files than their daily average for one day and then returned to normal does not warrant identification. Document the triage outcome and close the alert as “Benign — within acceptable deviation.”
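If your organization ingests IRM alerts into Microsoft Sentinel, triage can start from a workbook or query rather than the portal. The sketch below is a hedged example, assuming the Microsoft 365 Insider Risk Management data connector is enabled so that alerts land in Sentinel's SecurityAlert table; the exact ProductName string can vary by tenant and connector version, so verify it against your own data before relying on the filter.

```kusto
// Hedged sketch: surface recent insider risk alerts for triage.
// Assumes the IRM data connector writes to SecurityAlert; the
// ProductName match below is an assumption — confirm in your tenant.
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProductName has "Insider Risk"
| project TimeGenerated, AlertName, AlertSeverity, Description, SystemAlertId
| order by TimeGenerated desc
```

Note that the user identities in these records remain pseudonymized just as they are in the Purview portal; the query helps you prioritize which alerts to triage, not identify subjects.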

Step 2: De-anonymization — authorized identification. For alerts that warrant investigation (High or Critical severity, completed sequences, risk scores above the configured threshold), the next step is identifying the user. De-anonymization requires the Insider Risk Management Investigator role (not just the Analyst role). Clicking “Resolve anonymization” reveals the user’s name, department, manager, and employment details. This action is logged in the IRM audit trail — there is a permanent record that you de-anonymized this user at this time for this alert.

De-anonymization is an irreversible, audited action

Once you de-anonymize a user in an IRM alert, the action is logged and cannot be undone. Only de-anonymize when the alert severity and indicator pattern warrant a formal investigation. De-anonymizing users out of curiosity — even if technically possible with the Investigator role — violates the privacy-by-design principles that govern IRM and may violate your organization's employee monitoring policies. Each de-anonymization should be justifiable if audited.

Step 3: Deep review — activity timeline and content explorer. After de-anonymization, you have access to the user’s full activity timeline within IRM. This timeline shows every monitored action chronologically: file downloads with file names and labels, email sends with recipients and attachment details, USB device connections with device identifiers, SharePoint sharing changes, browser activity to cloud storage sites, and printing activity.

The content explorer (available in cases, not just alerts) lets you view the actual content involved — the files that were downloaded, the emails that were sent, the documents that were printed. This is IRM’s forensic evidence layer: you can see not just that the user downloaded 500 files, but which 500 files they downloaded and whether any contained sensitive data.

Step 4: Create case — HR and legal coordination. When the deep review confirms that the user’s behavior warrants formal investigation, create an IRM case. The case is a formal investigation container that tracks the investigation, stores evidence, records analyst actions, and facilitates stakeholder coordination.

Before creating the case, notify HR and legal counsel. IRM investigations that may result in personnel actions (termination, disciplinary action) require HR involvement. Investigations that may result in legal proceedings (civil or criminal action for data theft) require legal counsel involvement. In many organizations, the SOC analyst does not create the IRM case directly — they present the alert findings to a cross-functional review team that includes HR, legal, and security, and the team collectively decides whether to open a formal case.

Step 5: Investigate — collect evidence. With the case created and stakeholders engaged, conduct the formal investigation. Review the complete activity timeline. Identify every data artifact involved (which files, which emails, which documents). Determine whether data left the organization (check DLP alerts, check email delivery status, check browser upload activity). If data did leave, determine where it went and whether it can be recovered or contained.

If the investigation requires locating specific content (the exact files the user downloaded, the exact emails they forwarded), escalate to eDiscovery (subsection 3.8) for content search. eDiscovery preserves the content as evidence and can place it on legal hold to prevent deletion.

If the investigation overlaps with a security incident (the user’s account may also be compromised, or the insider risk behavior may have been triggered by an external attacker who gained access to the account), coordinate with the SOC incident response workflow from Module 1.2. The IRM investigation and the security investigation run in parallel, sharing findings through the cross-functional review team.

Step 6: Resolve — determine outcome and close. The resolution depends on the investigation findings and the cross-functional team’s decision.

Benign — the behavior was legitimate. The user downloaded files for a valid business purpose, the pattern happened to match a risk profile, and no data theft occurred. Close the case with documentation of the finding. No personnel action.

Policy violation — the user’s behavior violated data handling policies but was not malicious. The user forwarded documents to their personal email for convenience, not for theft. Resolution: policy retraining, manager notification, potential written warning depending on HR policy.

Data theft confirmed — the user deliberately exfiltrated organizational data. Resolution depends on the organization’s response policy and legal counsel’s advice: immediate termination, legal action to recover the data, regulatory notification if personal data was involved, and potential criminal referral for severe cases.

Account compromise — the IRM alert was triggered by an attacker operating through a compromised account, not by the employee’s own actions. Transfer the case to the standard security incident response workflow. The employee is a victim, not a subject.


Evidence handling in IRM investigations

IRM evidence requires careful handling because it may be used in employment proceedings, civil litigation, or criminal prosecution. Standard security incident evidence handling (Module 14) applies, with additional requirements specific to employee investigations.

Chain of custody. Every evidence artifact (file download records, email metadata, activity timelines) must have a documented chain of custody showing when it was collected, by whom, from which system, and how it was stored. IRM’s built-in audit logging provides part of this chain, but you should also document your own investigation actions in the case notes.

Legal hold coordination. If legal counsel determines that the investigation may lead to litigation, request an eDiscovery legal hold on the user’s mailbox, OneDrive, and relevant SharePoint sites. This preserves all content from deletion — even if the user attempts to cover their tracks by deleting emails or files. Legal hold is a legal action, not a technical one — it requires authorization from legal counsel, not just the SOC analyst.

Interview coordination. If the investigation leads to an employee interview (typically conducted by HR, not by the SOC analyst), the IRM activity timeline provides the factual basis for the interview questions. Provide HR with a sanitized summary of the findings — specific actions, dates, and data volumes — without sharing the raw IRM interface. HR does not need access to the IRM system. They need the investigation findings presented in a format they can use in the interview.


IRM investigation vs standard security IR: the key differences

IRM investigations and standard security incident response follow similar investigative logic (detect, investigate, contain, remediate) but differ in critical procedural aspects that affect how you conduct the investigation.

IRM Investigation vs Standard IR — Procedural Differences

| Aspect | Standard Security IR | IRM Investigation |
|---|---|---|
| Subject | External attacker (unknown) | Employee (known, has legal rights) |
| Initial access to data | Open to SOC team | Pseudonymized; requires de-anonymization |
| Stakeholders | SOC team, CISO | SOC + HR + legal counsel |
| Containment | Account disable, device isolate | Coordinate with HR before any action |
| Outcome | Remediation + incident report | Personnel action + potential litigation |
| Evidence standard | Internal documentation | May need to withstand legal scrutiny |
| Timeline pressure | Contain ASAP | Deliberate; premature action may compromise the case |
The critical difference: In standard IR, you contain first and investigate second — speed matters because the attacker is active. In IRM, premature containment (disabling the employee's account, confiscating their device) may alert them to delete evidence, may violate employment law, or may compromise a legal case. IRM investigations are deliberately paced, with legal counsel guiding each step.

Correlating IRM alerts with security events

Some IRM alerts overlap with security incidents. A compromised account may trigger both security alerts (anomalous sign-in from Defender) and insider risk alerts (anomalous data access from IRM). Determining whether the user is an insider threat or a victim of account compromise changes the entire investigation direction and response.

Use Sentinel to correlate IRM triggering signals with security event data. If an IRM alert fires for a user whose account also shows anomalous sign-in patterns from an unusual IP (check SigninLogs), the behavior may be attacker-driven rather than employee-driven. If the sign-in activity is normal (corporate IP, normal device, normal hours) but the data access pattern is anomalous, the behavior is more likely employee-driven.

// Correlate IRM user activity with sign-in anomalies
let irmUser = "j.morrison@northgateeng.com";
let irmWindow = 7d;
SigninLogs
| where TimeGenerated > ago(irmWindow)
| where UserPrincipalName =~ irmUser
| where ResultType == "0"
| extend Country = tostring(LocationDetails.countryOrRegion)
| summarize SigninCount = count(),
    Countries = make_set(Country, 10),
    IPs = dcount(IPAddress),
    RiskySignins = countif(RiskLevelDuringSignIn in ("medium", "high"))
    by bin(TimeGenerated, 1d)
| order by TimeGenerated desc

If RiskySignins > 0 during the same period as the IRM alert, escalate to the security team for parallel investigation. If all sign-ins are from normal locations with zero risk, the behavior is coming from the legitimate user — proceed with the IRM workflow.
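The same correlation can be automated as a single query that flags IRM-alerted users with concurrent risky sign-ins. This is a hedged sketch: it assumes IRM alerts are ingested into Sentinel's SecurityAlert table, and that the account UPN can be pulled from the first alert entity — entity layout varies by connector, so adjust the extraction to your schema.

```kusto
// Hedged sketch: find users with both an insider risk alert and
// risky sign-ins in the same window — a signal the behavior may be
// attacker-driven rather than employee-driven.
// Assumptions: IRM alerts land in SecurityAlert; the UPN sits in the
// first entry of the Entities array (verify against your schema).
let lookback = 7d;
let riskyUsers = SigninLogs
    | where TimeGenerated > ago(lookback)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | distinct UserPrincipalName;
SecurityAlert
| where TimeGenerated > ago(lookback)
| where ProductName has "Insider Risk"
| extend Upn = tostring(parse_json(Entities)[0].Name)
| where Upn in~ (riskyUsers)
| project TimeGenerated, AlertName, AlertSeverity, Upn
```

Any match here warrants escalation to the security team for a parallel account-compromise investigation before the IRM workflow proceeds.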


Adaptive Protection: automated DLP enforcement from IRM risk scores

Adaptive Protection is the integration point between IRM and DLP. When IRM calculates a user’s risk score as elevated, Adaptive Protection can automatically apply stricter DLP policies to that user — without manual intervention.

For example: a user with a normal risk score is subject to standard DLP policies (warn on external sharing, block only bulk sensitive data). When their IRM risk score escalates to High (they triggered multiple indicators after a resignation event), Adaptive Protection automatically applies a stricter DLP policy: block all external sharing, block USB copy, and alert on any file download above a threshold. This dynamic enforcement tightens data protection specifically for users who are exhibiting risky behavior, without affecting the rest of the organization.

For SOC analysts, Adaptive Protection means that DLP alert volume for high-risk users may increase during an IRM investigation — because the user is subject to stricter policies that generate more alerts. This is expected behavior, not a sudden increase in data exposure. Check the DLP policy name in the alert: if it references “Adaptive” or a risk-level-based policy, the alert was triggered by the IRM-driven policy escalation, not by a new data exposure event.
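That policy-name check can also be scripted. The query below is a hedged sketch, assuming DLP alerts are ingested into Sentinel's SecurityAlert table and that your adaptive (risk-level-based) policies follow a naming convention containing "Adaptive" — both the product-name filter and the naming convention are assumptions to substitute with your tenant's actual values.

```kusto
// Hedged sketch: split recent DLP alerts into those raised by an
// Adaptive Protection policy vs standard DLP policies.
// Assumptions: DLP alerts land in SecurityAlert, and adaptive
// policy names contain "Adaptive" — adjust both to your environment.
SecurityAlert
| where TimeGenerated > ago(24h)
| where ProductName has "Data Loss Prevention" or ProviderName has "DLP"
| extend PolicySource = iff(AlertName has "Adaptive",
    "IRM-driven (Adaptive Protection)", "Standard DLP")
| summarize Alerts = count() by PolicySource, AlertName
```

A spike confined to the "IRM-driven" bucket during an active investigation is the expected enforcement tightening described above, not new data exposure.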

Try it yourself

If your lab has E5 licensing with IRM configured, navigate to Insider Risk Management → Alerts in the Purview portal. Review any test alerts present (in a lab without the HR connector, you may not have alerts). Explore the alert detail view: the risk score breakdown, the indicator list, and the activity timeline. If no alerts exist, review the Microsoft documentation for IRM alert investigation to familiarize yourself with the interface — the exam tests the workflow concepts, not navigation clicks.

What you should observe

The alert view shows a risk score (numerical value), the contributing indicators (categorized by type), and a chronological activity timeline. If pseudonymization is enabled, user identities appear as anonymous IDs. The escalation path from alert to case creation is visible in the interface controls. Understanding this flow — triage, de-anonymize, review, create case, investigate, resolve — is the key takeaway regardless of whether you have live test data.


Knowledge check

Check your understanding

1. An IRM alert shows a Critical risk score for a user identified as "User-4291" (pseudonymized). The indicators show 500 files downloaded and copied to USB after a resignation trigger. What do you do before de-anonymizing the user?

Correct answer: Verify that the alert severity and indicator pattern genuinely warrant identification. Critical score + completed data theft sequence + resignation trigger = sufficient justification. But before de-anonymizing, confirm you have the Insider Risk Management Investigator role (not just Analyst), confirm the investigation aligns with your organization's IRM procedures (some organizations require management approval before de-anonymization), and document your justification in the alert notes. De-anonymization is audited and must be defensible if reviewed.

Incorrect options:
- De-anonymize immediately — Critical alerts require instant action
- Close the alert — pseudonymized data cannot be investigated
- Escalate to the Defender XDR portal for full investigation

2. Your IRM investigation confirms that a departing employee copied 2,000 customer records to a personal USB drive. Legal counsel is not yet engaged. What is your next step?

Correct answer: Engage legal counsel immediately before taking any further action. Confirmed data theft of customer records creates potential legal liability: employment law implications (the termination must follow proper process), data protection implications (customer personal data may require breach notification), and potential criminal implications (data theft may be a criminal offence). The SOC analyst's role is to present the evidence to the cross-functional team (security + HR + legal). Legal counsel guides the response actions.

Incorrect options:
- Confront the employee directly and demand return of the data
- Disable their account immediately to prevent further access
- File a police report for data theft