4.10 Cross-Product Investigation: Defender for Cloud + Sentinel + XDR
Domain 3 — Manage Incident Response: "Investigate complex attacks, such as multi-stage, multi-domain, and lateral movement." This subsection demonstrates cross-product investigation where cloud infrastructure alerts combine with identity and endpoint evidence.
Introduction
Cloud attacks rarely stay in one domain. An attacker who compromises an Azure identity does not just access Azure resources — they may also access M365 data, on-premises systems via VPN, and other cloud environments via federated trust. A VM compromised through an endpoint vulnerability generates alerts in both Defender for Cloud (cloud workload protection) and MDE (endpoint detection). A storage account accessed by a stolen identity triggers both a Defender for Storage alert (anomalous access) and an Entra ID risk event (unusual sign-in).
This subsection builds a complete cross-product investigation timeline for a cloud infrastructure attack — tracing the attacker from identity compromise through privilege escalation, resource deployment, and data exfiltration across Entra ID, Azure Resource Manager, Defender for Cloud, and Sentinel.
The scenario: cloud identity compromise leading to resource abuse
The SOC receives three correlated alerts:
Alert 1 (Entra ID Protection): “Unfamiliar sign-in properties” — the identity admin@northgateeng.com signed in from IP 198.51.100.77 (US) at 03:12 UTC. The user normally signs in from UK IPs during business hours.
Alert 2 (Defender for Cloud): “Suspicious Azure Resource Manager operation” — the same identity created a new VM in Brazil South region at 03:16 UTC and modified a network security group at 03:18 UTC.
Alert 3 (Defender for Storage): “Access from an unusual location to a storage account” — the identity accessed the prod-data storage account at 03:22 UTC from the same US IP, downloading 2.3 GB of data using an account key.
Phase 1: Identity compromise analysis
Start with the identity — everything else follows from the compromised credentials.
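In Sentinel, a sign-in review of this shape can be pulled from the SigninLogs table. The query below is a sketch — the seven-day lookback is an assumption, and the projected columns follow the standard SigninLogs schema:

```kusto
// Sketch: recent sign-in history for the suspect identity.
// Lookback window is an assumption; adjust to the incident timeframe.
SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName =~ "admin@northgateeng.com"
| project TimeGenerated, IPAddress, Location, AppDisplayName,
          ResourceDisplayName, RiskLevelDuringSignIn
| order by TimeGenerated asc
```

The results below show the contrast between the baseline sign-in pattern and the anomalous activity.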
| Time | IP | Country | App | Resource | Risk |
|---|---|---|---|---|---|
| Mar 20 17:30 | 10.0.1.10 | GB | Azure Portal | Windows Azure SM | none |
| Mar 21 03:12 | 198.51.100.77 | US | Azure Portal | Windows Azure SM | high |
| Mar 21 03:14 | 198.51.100.77 | US | Azure Portal | Azure Storage | high |
Phase 2: Cloud resource abuse analysis
Now trace what the attacker did with the compromised admin credentials in Azure.
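Management plane activity is recorded in the AzureActivity table. A sketch of the query that produces the timeline below (the one-day lookback is an assumption):

```kusto
// Sketch: all ARM operations performed by the compromised identity.
AzureActivity
| where TimeGenerated > ago(1d)
| where Caller =~ "admin@northgateeng.com"
| project TimeGenerated, OperationNameValue, ResourceGroup, ActivityStatusValue
| order by TimeGenerated asc
```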
| Time | Operation | Resource Group | Status |
|---|---|---|---|
| 03:14 | roleAssignments/write | — | Succeeded |
| 03:16 | virtualMachines/write | rg-brazil-temp | Succeeded |
| 03:17 | publicIPAddresses/write | rg-brazil-temp | Succeeded |
| 03:18 | networkSecurityGroups/securityRules/write | rg-brazil-temp | Succeeded |
| 03:22 | storageAccounts/listKeys/action | rg-production | Succeeded |
Phase 3: Data access analysis
The attacker listed storage account keys at 03:22. Check the storage account diagnostic logs for subsequent access using those keys.
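If the storage account's diagnostic logs are flowing to Log Analytics, the StorageBlobLogs table records each data plane request. The query below is a sketch — the account name filter, the 2-minute bin size, and the attacker IP filter are assumptions for this scenario; note that CallerIpAddress in StorageBlobLogs typically includes a port suffix, which is why `startswith` is used:

```kusto
// Sketch: storage access from the attacker's IP, bucketed to show download bursts.
// Requires diagnostic logging to Log Analytics (StorageBlobLogs) to be enabled.
StorageBlobLogs
| where TimeGenerated > ago(1d)
| where AccountName == "proddata"              // assumed log form of the prod-data account
| where CallerIpAddress startswith "198.51.100.77"  // field may include a :port suffix
| summarize Requests = count(), DownloadedBytes = sum(ResponseBodySize)
    by bin(TimeGenerated, 2m), OperationName
| order by TimeGenerated asc
```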
| Time | Operation | Requests | Downloaded | Containers |
|---|---|---|---|---|
| 03:22 | ListBlobs | 12 | 0 | [customer-data, financial-reports, backups] |
| 03:24 | GetBlob | 847 | 2.3 GB | [customer-data] |
| 03:38 | GetBlob | 23 | 145 MB | [financial-reports] |
Phase 4: Complete cross-product timeline
Combine all evidence sources into a unified timeline.
| Time | Source | Action | Detail | Status/Severity |
|---|---|---|---|---|
| 03:12 | Identity | Sign-in: Azure Portal | IP 198.51.100.77 (US) | Risk: high |
| 03:12 | Entra ID alert | Unfamiliar sign-in properties | Initial Access | High |
| 03:14 | ARM | roleAssignments/write | Subscription level | Succeeded |
| 03:16 | ARM | virtualMachines/write | rg-brazil-temp | Succeeded |
| 03:16 | DfC alert | Suspicious ARM operation | Persistence | Medium |
| 03:18 | ARM | NSG rule write | rg-brazil-temp | Succeeded |
| 03:22 | ARM | storageAccounts/listKeys | rg-production | Succeeded |
| 03:22 | DfC alert | Unusual storage access | Data Access | Medium |
| 03:24 | Storage | GetBlob × 847 | customer-data | 2.3 GB |
| 03:38 | Storage | GetBlob × 23 | financial-reports | 145 MB |
Phase 5: Remediation and posture improvement
Immediate containment: Revoke the compromised admin account’s sessions. Reset password. Disable the account temporarily until the compromise vector is determined. Rotate all storage account keys for the prod-data account (invalidates the stolen keys). Delete the unauthorized VM in Brazil South. Revert the role assignment created at 03:14. Revert the NSG rule changes.
Data breach assessment: 2.3 GB from the customer-data container plus 145 MB from financial-reports. Determine whether the customer data contains PII (likely). Begin the GDPR/UK GDPR notification assessment — the 72-hour notification clock runs from when the organisation becomes aware of the breach, so document the awareness time immediately.
Posture improvement: Implement conditional access policies requiring compliant device and trusted location for Azure Resource Manager access (prevents admin access from untrusted IPs). Implement Azure Policy restricting VM deployment to authorised regions only (UK South, West Europe — prevents Brazil South deployment). Replace storage account key-based access with Entra ID RBAC-based access (eliminates key theft as an attack vector). Enable Defender CSPM attack path analysis to identify similar paths to sensitive data stores. Enable PIM for admin roles (require just-in-time elevation, reducing the standing permissions available to a compromised identity).
Detecting lateral movement across cloud resources
Cloud lateral movement looks different from network-based lateral movement. Instead of moving from one server to another via SMB or RDP, cloud attackers move between resource types using the management plane: compromised identity → ARM API calls → access to storage, databases, key vaults, and other resources that the identity has RBAC permissions to reach.
To detect this, correlate Azure Activity Log operations with the identity timeline:
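A sketch of that correlation in KQL. Deriving the resource type from OperationNameValue (e.g. "Microsoft.Compute/virtualMachines/write" → "virtualMachines") is an assumption that holds for most provider operations; the lookback and threshold are tuning choices:

```kusto
// Sketch: single identity touching many Azure resource types in a short window.
AzureActivity
| where TimeGenerated > ago(1d)
| where ActivityStatusValue == "Success"
| extend ResourceType = tostring(split(OperationNameValue, "/")[1])
| summarize ResourceTypes = make_set(ResourceType),
            Operations = count(),
            Duration = max(TimeGenerated) - min(TimeGenerated)
    by Caller
| where array_length(ResourceTypes) >= 3
```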
| Caller | Resource Types | Count | Duration |
|---|---|---|---|
| admin@northgateeng.com | ["virtualMachines","storageAccounts","roleAssignments","networkSecurityGroups","publicIPAddresses"] | 7 | 26 min |
This query pattern — “single identity accessing multiple resource types in a short window” — is a powerful cloud-specific detection that you can codify as a Sentinel analytics rule for ongoing monitoring.
Structuring the cloud incident report
Cloud infrastructure incidents require a specific report structure that differs from M365 incident reports (Module 3.9). The key additions are resource inventory (which Azure resources were affected), ARM operation log (every management plane action the attacker took), posture gap analysis (which Defender for Cloud recommendations existed before the incident and were not implemented), and cost impact (for crypto mining incidents, the Azure compute charges incurred by the attacker’s VMs).
Report structure for cloud incidents:
Executive summary: what was compromised, what was the impact, and what has been done. “An Azure admin account was compromised. The attacker deployed a cryptocurrency mining VM (estimated cost: $340) and exfiltrated 2.45 GB of customer and financial data. The account has been contained, the unauthorized VM deleted, and storage keys rotated.”
Identity compromise analysis: how the identity was compromised, what access the identity had, and whether conditional access controls existed.
ARM operation timeline: every Azure management operation the attacker performed, in chronological order, with resource details.
Data exposure assessment: which storage accounts, databases, or other data stores were accessed, how much data was downloaded, and whether the data contains PII or regulated information.
Posture gap analysis: which Defender for Cloud recommendations existed before the incident that would have prevented or limited the attack. Map each gap to the specific attack step it would have prevented.
Remediation actions: what was done to contain the incident and what posture improvements were implemented to prevent recurrence.
From investigation to detection rules
Every cloud incident investigation reveals patterns that should become Sentinel analytics rules. The investigation in this subsection produced three detection opportunities:
Detection 1: VM creation in unusual region. An Azure identity creates a VM in a region not in the organisation’s approved list. Detection: scheduled rule that queries AzureActivity for virtualMachines/write operations and checks the resource location against an approved region list. This catches crypto mining deployments immediately.
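A sketch of Detection 1. The approved-region list is an assumption, as is the `resourceLocation` field inside the Properties payload — verify the field name against the AzureActivity schema in your workspace:

```kusto
// Sketch: VM creation outside the organisation's approved region list.
let ApprovedRegions = dynamic(["uksouth", "westeurope"]);  // assumed approved list
AzureActivity
| where OperationNameValue =~ "Microsoft.Compute/virtualMachines/write"
| where ActivityStatusValue == "Success"
| extend Region = tostring(todynamic(Properties).resourceLocation)  // field name to verify
| where isnotempty(Region) and Region !in (ApprovedRegions)
| project TimeGenerated, Caller, CallerIpAddress, ResourceGroup, Region
```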
Detection 2: Storage key listing followed by unusual access. An identity lists storage account keys (storageAccounts/listKeys/action) and within 30 minutes, the storage account receives access from a new IP. Detection: scheduled rule that joins AzureActivity listKeys events with StorageBlobLogs access events, flagging when the accessing IP does not match the listing identity’s IP.
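A sketch of Detection 2. The field handling is an assumption to verify in your workspace: the account name is derived from the tail of `_ResourceId`, and the port suffix is stripped from the StorageBlobLogs caller IP before comparison:

```kusto
// Sketch: key listing followed within 30 minutes by key-based access
// from an IP other than the one that listed the keys.
let KeyListings = AzureActivity
    | where OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action"
    | where ActivityStatusValue == "Success"
    | extend AccountName = tolower(tostring(split(_ResourceId, "/")[-1]))
    | project KeyListTime = TimeGenerated, Caller, ListerIp = CallerIpAddress, AccountName;
StorageBlobLogs
| where AuthenticationType == "AccountKey"
| extend AccountName = tolower(AccountName),
         AccessIp = tostring(split(CallerIpAddress, ":")[0])  // strip the port suffix
| join kind=inner KeyListings on AccountName
| where TimeGenerated between (KeyListTime .. (KeyListTime + 30m))
| where AccessIp != ListerIp
| project TimeGenerated, AccountName, Caller, ListerIp, AccessIp, OperationName
```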
Detection 3: Multiple resource type access in a short window. A single identity accesses 3+ different Azure resource types within 30 minutes from a non-trusted IP. Detection: the query from the lateral movement section above, scheduled as an analytics rule.
These detection rules are cloud-specific adaptations of the same detection engineering methodology covered in Module 15 (Detection Engineering).
Cloud-specific evidence preservation
Cloud infrastructure incidents require evidence preservation techniques that differ from traditional endpoint forensics. Azure resources can be modified, deleted, or redeployed — potentially destroying evidence if not preserved promptly.
VM disk snapshots. Before reimaging or deleting a compromised VM, create a snapshot of its OS disk and data disks. The snapshot preserves the filesystem state at the point of capture — including malware, attacker tools, modified configuration files, and log files. Store the snapshot in a separate resource group with restricted access (the investigation team only). This is the cloud equivalent of imaging a hard drive in traditional forensics.
Activity log export. Azure Activity Log has a 90-day retention by default. For incidents that may result in legal proceedings, export the Activity Log to a storage account or Log Analytics workspace with extended retention (up to 7 years). The Activity Log is the definitive record of every management plane operation — losing it means losing the evidence of what the attacker created, modified, or deleted in your Azure environment.
Resource lock for evidence. Place a “CannotDelete” resource lock on any Azure resource that is evidence in the investigation: the compromised VM (before snapshotting), the storage account that was accessed (the diagnostic logs are evidence), and any resources the attacker created (the crypto mining VM contains evidence of the attacker’s tools and configuration). Resource locks prevent accidental deletion by other administrators who may not know the resource is under investigation.
Network capture. If the compromised VM is still running and you need to capture network traffic for evidence (C2 communication patterns, data exfiltration to external IPs), use Azure Network Watcher’s packet capture feature. This captures traffic at the NIC level and stores it in a storage account. Start the capture before isolating the VM — isolation stops the traffic and prevents capture of the attacker’s communication patterns.
These preservation steps should be documented in your cloud incident response playbook and executed early in the investigation — before containment actions that modify or destroy evidence.
The five-phase approach — identity compromise → cloud resource operations → data access → unified timeline → remediation + posture improvement — parallels the BEC investigation methodology from Module 3.9. The data sources are different (AzureActivity, StorageBlobLogs, SecurityAlert instead of EmailEvents, CloudAppEvents, OfficeActivity), but the investigation logic is identical: trace the attacker's actions chronologically across all data sources, assess the damage, contain the threat, and fix the root cause.
Try it yourself
Using the attack timeline above, identify: (1) which response action is most urgent (hint: the storage keys are still valid), (2) which posture improvement would have prevented the initial access, and (3) which posture improvement would have prevented the data exfiltration even if the initial access succeeded. This exercise builds the defensive thinking that connects incident response to posture improvement.
What you should observe
(1) Rotating storage account keys is the most urgent — the attacker may still be downloading data using the stolen keys right now. (2) Conditional access requiring a trusted location for Azure Resource Manager would have blocked the sign-in from the US IP. (3) Replacing key-based storage access with Entra ID RBAC would have prevented the listKeys operation — the attacker would have needed Azure Storage data access permissions, which are separate from admin permissions and can be further restricted by conditional access.
Knowledge check
Check your understanding
1. The attack timeline shows the attacker listed storage account keys at 03:22 and downloaded 2.3 GB by 03:38. You discover the incident at 08:00. What is the most urgent containment action?
2. Which posture improvement would have prevented the entire attack even if the admin credentials were compromised?